Mar 17 17:57:59.001932 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025 Mar 17 17:57:59.001965 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 17:57:59.001980 kernel: BIOS-provided physical RAM map: Mar 17 17:57:59.001987 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 17 17:57:59.001994 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 17 17:57:59.002001 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 17 17:57:59.002009 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Mar 17 17:57:59.002016 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Mar 17 17:57:59.002023 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 17 17:57:59.002030 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 17 17:57:59.002041 kernel: NX (Execute Disable) protection: active Mar 17 17:57:59.002051 kernel: APIC: Static calls initialized Mar 17 17:57:59.002059 kernel: SMBIOS 2.8 present. Mar 17 17:57:59.002066 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Mar 17 17:57:59.002075 kernel: Hypervisor detected: KVM Mar 17 17:57:59.002083 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 17 17:57:59.002097 kernel: kvm-clock: using sched offset of 3397278672 cycles Mar 17 17:57:59.002106 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 17 17:57:59.002114 kernel: tsc: Detected 2494.140 MHz processor Mar 17 17:57:59.002122 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 17:57:59.002130 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 17:57:59.002138 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Mar 17 17:57:59.002146 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 17 17:57:59.002154 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 17:57:59.002167 kernel: ACPI: Early table checksum verification disabled Mar 17 17:57:59.002174 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Mar 17 17:57:59.002182 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:57:59.002190 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:57:59.002198 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:57:59.002206 kernel: ACPI: FACS 0x000000007FFE0000 000040 Mar 17 17:57:59.002214 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:57:59.002221 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:57:59.002231 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:57:59.002243 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:57:59.002251 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Mar 17 17:57:59.002259 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Mar 17 17:57:59.002267 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Mar 17 17:57:59.002275 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Mar 17 17:57:59.002283 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Mar 17 17:57:59.002291 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Mar 17 17:57:59.002304 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Mar 17 17:57:59.002316 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 17 17:57:59.002327 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 17 17:57:59.002336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Mar 17 17:57:59.002345 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Mar 17 17:57:59.002353 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Mar 17 17:57:59.002362 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Mar 17 17:57:59.002374 kernel: Zone ranges: Mar 17 17:57:59.002383 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 17:57:59.002391 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Mar 17 17:57:59.002400 kernel: Normal empty Mar 17 17:57:59.002408 kernel: Movable zone start for each node Mar 17 17:57:59.002417 kernel: Early memory node ranges Mar 17 17:57:59.002425 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 17 17:57:59.002433 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Mar 17 17:57:59.002442 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Mar 17 17:57:59.002450 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 17:57:59.002463 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 17 17:57:59.002477 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Mar 17 17:57:59.002489 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 17 17:57:59.002501 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 17 17:57:59.002512 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 17 17:57:59.002524 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 17 17:57:59.002536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 17 17:57:59.002548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 17:57:59.002562 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 17 17:57:59.002584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 17 17:57:59.002596 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 17:57:59.002610 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 17 17:57:59.002625 kernel: TSC deadline timer available Mar 17 17:57:59.002637 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 17 17:57:59.002649 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 17 17:57:59.002662 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Mar 17 17:57:59.002680 kernel: Booting paravirtualized kernel on KVM Mar 17 17:57:59.002692 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 17:57:59.002712 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 17 17:57:59.002724 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Mar 17 17:57:59.002806 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Mar 17 17:57:59.002818 kernel: pcpu-alloc: [0] 0 1 Mar 17 17:57:59.002830 kernel: kvm-guest: PV spinlocks disabled, no host support Mar 17 17:57:59.002845 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 17:57:59.002858 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:57:59.002872 kernel: random: crng init done Mar 17 17:57:59.002892 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:57:59.002905 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 17 17:57:59.002918 kernel: Fallback order for Node 0: 0 Mar 17 17:57:59.002929 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Mar 17 17:57:59.002941 kernel: Policy zone: DMA32 Mar 17 17:57:59.002953 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:57:59.002966 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 127196K reserved, 0K cma-reserved) Mar 17 17:57:59.002978 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 17:57:59.002992 kernel: Kernel/User page tables isolation: enabled Mar 17 17:57:59.003012 kernel: ftrace: allocating 37910 entries in 149 pages Mar 17 17:57:59.003025 kernel: ftrace: allocated 149 pages with 4 groups Mar 17 17:57:59.003036 kernel: Dynamic Preempt: voluntary Mar 17 17:57:59.003048 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:57:59.003063 kernel: rcu: RCU event tracing is enabled. Mar 17 17:57:59.003078 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 17:57:59.003092 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:57:59.003105 kernel: Rude variant of Tasks RCU enabled. Mar 17 17:57:59.003119 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:57:59.003139 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 17 17:57:59.003154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 17:57:59.003166 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 17 17:57:59.003186 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 17 17:57:59.003199 kernel: Console: colour VGA+ 80x25 Mar 17 17:57:59.003213 kernel: printk: console [tty0] enabled Mar 17 17:57:59.003225 kernel: printk: console [ttyS0] enabled Mar 17 17:57:59.003237 kernel: ACPI: Core revision 20230628 Mar 17 17:57:59.003251 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 17 17:57:59.003273 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 17:57:59.003288 kernel: x2apic enabled Mar 17 17:57:59.003303 kernel: APIC: Switched APIC routing to: physical x2apic Mar 17 17:57:59.003316 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 17 17:57:59.003330 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Mar 17 17:57:59.003353 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Mar 17 17:57:59.003367 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Mar 17 17:57:59.003381 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Mar 17 17:57:59.003417 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 17:57:59.003431 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 17:57:59.003445 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 17:57:59.003458 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 17:57:59.003490 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Mar 17 17:57:59.003505 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 17 17:57:59.003518 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 17 17:57:59.003533 kernel: MDS: Mitigation: Clear CPU buffers Mar 17 17:57:59.003545 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 17 17:57:59.003565 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 17 17:57:59.003575 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 17 17:57:59.003584 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 17 17:57:59.003593 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 17 17:57:59.003603 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Mar 17 17:57:59.003612 kernel: Freeing SMP alternatives memory: 32K Mar 17 17:57:59.003621 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:57:59.003630 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:57:59.003644 kernel: landlock: Up and running. Mar 17 17:57:59.003654 kernel: SELinux: Initializing. Mar 17 17:57:59.003663 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 17 17:57:59.003673 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 17 17:57:59.003682 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Mar 17 17:57:59.003692 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:57:59.003701 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:57:59.003710 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:57:59.003721 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Mar 17 17:57:59.003740 kernel: signal: max sigframe size: 1776 Mar 17 17:57:59.003769 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:57:59.003785 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:57:59.005828 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 17 17:57:59.005845 kernel: smp: Bringing up secondary CPUs ... Mar 17 17:57:59.005855 kernel: smpboot: x86: Booting SMP configuration: Mar 17 17:57:59.005865 kernel: .... node #0, CPUs: #1 Mar 17 17:57:59.005882 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 17:57:59.005892 kernel: smpboot: Max logical packages: 1 Mar 17 17:57:59.005910 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Mar 17 17:57:59.005919 kernel: devtmpfs: initialized Mar 17 17:57:59.005929 kernel: x86/mm: Memory block size: 128MB Mar 17 17:57:59.005938 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:57:59.005948 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 17:57:59.005957 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:57:59.005966 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:57:59.005975 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:57:59.005985 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:57:59.005999 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 17:57:59.006008 kernel: audit: type=2000 audit(1742234278.572:1): state=initialized audit_enabled=0 res=1 Mar 17 17:57:59.006018 kernel: cpuidle: using governor menu Mar 17 17:57:59.006027 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:57:59.006036 kernel: dca service started, version 1.12.1 Mar 17 17:57:59.006045 kernel: PCI: Using configuration type 1 for base access Mar 17 17:57:59.006054 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 17 17:57:59.006064 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:57:59.006073 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:57:59.006086 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:57:59.006099 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:57:59.006112 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:57:59.006125 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:57:59.006138 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:57:59.006150 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 17 17:57:59.006163 kernel: ACPI: Interpreter enabled Mar 17 17:57:59.006177 kernel: ACPI: PM: (supports S0 S5) Mar 17 17:57:59.006189 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 17:57:59.006208 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 17:57:59.006222 kernel: PCI: Using E820 reservations for host bridge windows Mar 17 17:57:59.006235 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Mar 17 17:57:59.006247 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 17:57:59.006577 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:57:59.006722 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 17 17:57:59.006865 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 17 17:57:59.006889 kernel: acpiphp: Slot [3] registered Mar 17 17:57:59.006898 kernel: acpiphp: Slot [4] registered Mar 17 17:57:59.006908 kernel: acpiphp: Slot [5] registered Mar 17 17:57:59.006917 kernel: acpiphp: Slot [6] registered Mar 17 17:57:59.006926 kernel: acpiphp: Slot [7] registered Mar 17 17:57:59.006935 kernel: acpiphp: Slot [8] registered Mar 17 17:57:59.006944 kernel: acpiphp: Slot [9] registered Mar 17 17:57:59.006953 kernel: acpiphp: Slot [10] registered Mar 17 17:57:59.006963 kernel: acpiphp: Slot [11] registered Mar 17 17:57:59.006972 kernel: acpiphp: Slot [12] registered Mar 17 17:57:59.006985 kernel: acpiphp: Slot [13] registered Mar 17 17:57:59.006994 kernel: acpiphp: Slot [14] registered Mar 17 17:57:59.007003 kernel: acpiphp: Slot [15] registered Mar 17 17:57:59.007013 kernel: acpiphp: Slot [16] registered Mar 17 17:57:59.007022 kernel: acpiphp: Slot [17] registered Mar 17 17:57:59.007031 kernel: acpiphp: Slot [18] registered Mar 17 17:57:59.007040 kernel: acpiphp: Slot [19] registered Mar 17 17:57:59.007049 kernel: acpiphp: Slot [20] registered Mar 17 17:57:59.007058 kernel: acpiphp: Slot [21] registered Mar 17 17:57:59.007071 kernel: acpiphp: Slot [22] registered Mar 17 17:57:59.007080 kernel: acpiphp: Slot [23] registered Mar 17 17:57:59.007089 kernel: acpiphp: Slot [24] registered Mar 17 17:57:59.007099 kernel: acpiphp: Slot [25] registered Mar 17 17:57:59.007108 kernel: acpiphp: Slot [26] registered Mar 17 17:57:59.007117 kernel: acpiphp: Slot [27] registered Mar 17 17:57:59.007126 kernel: acpiphp: Slot [28] registered Mar 17 17:57:59.007135 kernel: acpiphp: Slot [29] registered Mar 17 17:57:59.007144 kernel: acpiphp: Slot [30] registered Mar 17 17:57:59.007153 kernel: acpiphp: Slot [31] registered Mar 17 17:57:59.007166 kernel: PCI host bridge to bus 0000:00 Mar 17 17:57:59.007291 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 17:57:59.007391 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Mar 17 17:57:59.007543 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 17 17:57:59.007650 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Mar 17 17:57:59.007779 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Mar 17 17:57:59.009006 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 17:57:59.009212 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Mar 17 17:57:59.009440 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Mar 17 17:57:59.009592 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Mar 17 17:57:59.009698 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Mar 17 17:57:59.011002 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Mar 17 17:57:59.011224 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Mar 17 17:57:59.011348 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Mar 17 17:57:59.011454 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Mar 17 17:57:59.011664 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Mar 17 17:57:59.013843 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Mar 17 17:57:59.014041 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Mar 17 17:57:59.014150 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Mar 17 17:57:59.014263 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Mar 17 17:57:59.014385 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Mar 17 17:57:59.014492 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Mar 17 17:57:59.014645 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Mar 17 17:57:59.017037 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Mar 17 17:57:59.017252 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Mar 17 17:57:59.017359 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 17:57:59.017499 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Mar 17 17:57:59.017609 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Mar 17 17:57:59.017718 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Mar 17 17:57:59.018660 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Mar 17 17:57:59.019937 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 17 17:57:59.020075 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Mar 17 17:57:59.020183 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Mar 17 17:57:59.020299 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Mar 17 17:57:59.020515 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Mar 17 17:57:59.020653 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Mar 17 17:57:59.020847 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Mar 17 17:57:59.020953 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Mar 17 17:57:59.021096 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Mar 17 17:57:59.021202 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Mar 17 17:57:59.021321 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Mar 17 17:57:59.021427 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Mar 17 17:57:59.021576 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Mar 17 17:57:59.021773 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Mar 17 17:57:59.021901 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Mar 17 17:57:59.022016 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Mar 17 17:57:59.022149 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Mar 17 17:57:59.022300 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Mar 17 17:57:59.022403 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Mar 17 17:57:59.022415 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 17 17:57:59.022425 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 17 17:57:59.022434 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 17:57:59.022444 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 17 17:57:59.022453 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Mar 17 17:57:59.022469 kernel: iommu: Default domain type: Translated Mar 17 17:57:59.022478 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 17:57:59.022488 kernel: PCI: Using ACPI for IRQ routing Mar 17 17:57:59.022497 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 17:57:59.022506 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 17 17:57:59.023826 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Mar 17 17:57:59.024038 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Mar 17 17:57:59.024183 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Mar 17 17:57:59.024317 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 17:57:59.024346 kernel: vgaarb: loaded Mar 17 17:57:59.024362 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 17 17:57:59.024377 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 17 17:57:59.024391 kernel: clocksource: Switched to clocksource kvm-clock Mar 17 17:57:59.024400 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:57:59.024410 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:57:59.024419 kernel: pnp: PnP ACPI init Mar 17 17:57:59.024428 kernel: pnp: PnP ACPI: found 4 devices Mar 17 17:57:59.024444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 17:57:59.024453 kernel: NET: Registered PF_INET protocol family Mar 17 17:57:59.024472 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 17:57:59.024487 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 17 17:57:59.024497 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:57:59.024506 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 17 17:57:59.024517 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 17 17:57:59.024526 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 17 17:57:59.024535 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 17 17:57:59.024550 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 17 17:57:59.024559 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:57:59.024568 kernel: NET: Registered PF_XDP protocol family Mar 17 17:57:59.024704 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 17 17:57:59.026037 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 17 
17:57:59.026253 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 17 17:57:59.026412 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Mar 17 17:57:59.026540 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Mar 17 17:57:59.026744 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Mar 17 17:57:59.026885 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 17 17:57:59.026900 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Mar 17 17:57:59.027005 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 36431 usecs Mar 17 17:57:59.027018 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:57:59.027027 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 17 17:57:59.027037 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Mar 17 17:57:59.027046 kernel: Initialise system trusted keyrings Mar 17 17:57:59.027057 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 17 17:57:59.027074 kernel: Key type asymmetric registered Mar 17 17:57:59.027086 kernel: Asymmetric key parser 'x509' registered Mar 17 17:57:59.027102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 17 17:57:59.027117 kernel: io scheduler mq-deadline registered Mar 17 17:57:59.027129 kernel: io scheduler kyber registered Mar 17 17:57:59.027143 kernel: io scheduler bfq registered Mar 17 17:57:59.027171 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 17:57:59.027186 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Mar 17 17:57:59.027202 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Mar 17 17:57:59.027224 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Mar 17 17:57:59.027238 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:57:59.027247 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 17:57:59.027257 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 17:57:59.027266 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 17:57:59.027276 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 17:57:59.027438 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 17 17:57:59.027455 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 17:57:59.027558 kernel: rtc_cmos 00:03: registered as rtc0 Mar 17 17:57:59.027693 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T17:57:58 UTC (1742234278) Mar 17 17:57:59.027901 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 17 17:57:59.027928 kernel: intel_pstate: CPU model not supported Mar 17 17:57:59.027943 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:57:59.027956 kernel: Segment Routing with IPv6 Mar 17 17:57:59.027969 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:57:59.027982 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:57:59.028005 kernel: Key type dns_resolver registered Mar 17 17:57:59.028021 kernel: IPI shorthand broadcast: enabled Mar 17 17:57:59.028035 kernel: sched_clock: Marking stable (991008038, 96948994)->(1195251329, -107294297) Mar 17 17:57:59.028049 kernel: registered taskstats version 1 Mar 17 17:57:59.028062 kernel: Loading compiled-in X.509 certificates Mar 17 17:57:59.028077 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd' Mar 17 17:57:59.028092 kernel: Key type .fscrypt registered 
Mar 17 17:57:59.028108 kernel: Key type fscrypt-provisioning registered Mar 17 17:57:59.028122 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:57:59.028144 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:57:59.028158 kernel: ima: No architecture policies found Mar 17 17:57:59.028171 kernel: clk: Disabling unused clocks Mar 17 17:57:59.028180 kernel: Freeing unused kernel image (initmem) memory: 43476K Mar 17 17:57:59.028195 kernel: Write protecting the kernel read-only data: 38912k Mar 17 17:57:59.028240 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K Mar 17 17:57:59.028258 kernel: Run /init as init process Mar 17 17:57:59.028274 kernel: with arguments: Mar 17 17:57:59.028285 kernel: /init Mar 17 17:57:59.028299 kernel: with environment: Mar 17 17:57:59.028309 kernel: HOME=/ Mar 17 17:57:59.028319 kernel: TERM=linux Mar 17 17:57:59.028328 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:57:59.028361 systemd[1]: Successfully made /usr/ read-only. Mar 17 17:57:59.028380 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:57:59.028392 systemd[1]: Detected virtualization kvm. Mar 17 17:57:59.028406 systemd[1]: Detected architecture x86-64. Mar 17 17:57:59.028421 systemd[1]: Running in initrd. Mar 17 17:57:59.028431 systemd[1]: No hostname configured, using default hostname. Mar 17 17:57:59.028442 systemd[1]: Hostname set to . Mar 17 17:57:59.028451 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:57:59.028461 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:57:59.028472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:57:59.028482 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:57:59.028497 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:57:59.028519 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:57:59.028539 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:57:59.028587 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:57:59.028605 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:57:59.028626 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:57:59.028643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:57:59.028666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:57:59.028696 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:57:59.028712 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:57:59.028733 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:57:59.029797 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:57:59.029830 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 17 17:57:59.029849 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:57:59.029863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:57:59.029877 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 17:57:59.029892 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:57:59.029906 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:57:59.029922 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:57:59.029935 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:57:59.029949 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:57:59.029962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:57:59.029983 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:57:59.029998 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:57:59.030012 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:57:59.030027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:57:59.030042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:57:59.030059 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:57:59.030071 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:57:59.030143 systemd-journald[181]: Collecting audit messages is disabled. Mar 17 17:57:59.030184 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:57:59.030203 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:57:59.030218 systemd-journald[181]: Journal started Mar 17 17:57:59.030241 systemd-journald[181]: Runtime Journal (/run/log/journal/ef7db9a8c6b94ba5bd115587d56dd2f7) is 4.9M, max 39.3M, 34.4M free. Mar 17 17:57:59.025103 systemd-modules-load[182]: Inserted module 'overlay' Mar 17 17:57:59.041802 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:57:59.058781 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:57:59.060892 systemd-modules-load[182]: Inserted module 'br_netfilter' Mar 17 17:57:59.075205 kernel: Bridge firewalling registered Mar 17 17:57:59.077290 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:57:59.083869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:57:59.084830 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:57:59.095173 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:57:59.099092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:57:59.102073 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:57:59.105309 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:57:59.131225 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:57:59.136475 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 17 17:57:59.142069 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:57:59.144007 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:57:59.149101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:57:59.153037 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:57:59.174681 dracut-cmdline[218]: dracut-dracut-053 Mar 17 17:57:59.179833 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 17:57:59.205035 systemd-resolved[220]: Positive Trust Anchors: Mar 17 17:57:59.205057 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:57:59.205095 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:57:59.208769 systemd-resolved[220]: Defaulting to hostname 'linux'. Mar 17 17:57:59.210323 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:57:59.211781 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:57:59.303799 kernel: SCSI subsystem initialized Mar 17 17:57:59.314789 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:57:59.326799 kernel: iscsi: registered transport (tcp) Mar 17 17:57:59.350857 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:57:59.350977 kernel: QLogic iSCSI HBA Driver Mar 17 17:57:59.410389 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:57:59.416122 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:57:59.447903 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:57:59.448032 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:57:59.448057 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:57:59.496813 kernel: raid6: avx2x4 gen() 15937 MB/s Mar 17 17:57:59.513818 kernel: raid6: avx2x2 gen() 19229 MB/s Mar 17 17:57:59.531054 kernel: raid6: avx2x1 gen() 16712 MB/s Mar 17 17:57:59.531166 kernel: raid6: using algorithm avx2x2 gen() 19229 MB/s Mar 17 17:57:59.549120 kernel: raid6: .... xor() 18175 MB/s, rmw enabled Mar 17 17:57:59.549238 kernel: raid6: using avx2x2 recovery algorithm Mar 17 17:57:59.573818 kernel: xor: automatically using best checksumming function avx Mar 17 17:57:59.754808 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:57:59.772913 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Mar 17 17:57:59.785089 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:57:59.804486 systemd-udevd[403]: Using default interface naming scheme 'v255'. Mar 17 17:57:59.812829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:57:59.824173 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:57:59.857128 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Mar 17 17:57:59.916815 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:57:59.924097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:58:00.023023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:58:00.035038 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:58:00.072228 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:58:00.079967 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:58:00.081302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:58:00.081634 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:58:00.090113 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:58:00.120187 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:58:00.156961 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Mar 17 17:58:00.236207 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 17 17:58:00.236518 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:58:00.236543 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:58:00.236564 kernel: GPT:9289727 != 125829119 Mar 17 17:58:00.236579 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:58:00.236591 kernel: GPT:9289727 != 125829119 Mar 17 17:58:00.236603 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:58:00.236629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:58:00.236641 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Mar 17 17:58:00.248109 kernel: ACPI: bus type USB registered Mar 17 17:58:00.248135 kernel: usbcore: registered new interface driver usbfs Mar 17 17:58:00.248149 kernel: usbcore: registered new interface driver hub Mar 17 17:58:00.248167 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Mar 17 17:58:00.248326 kernel: usbcore: registered new device driver usb Mar 17 17:58:00.248361 kernel: scsi host0: Virtio SCSI HBA Mar 17 17:58:00.248556 kernel: libata version 3.00 loaded. Mar 17 17:58:00.248571 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 17 17:58:00.322181 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 17:58:00.322225 kernel: scsi host1: ata_piix Mar 17 17:58:00.322498 kernel: AES CTR mode by8 optimization enabled Mar 17 17:58:00.322521 kernel: scsi host2: ata_piix Mar 17 17:58:00.322921 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Mar 17 17:58:00.322969 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Mar 17 17:58:00.232948 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 17 17:58:00.379530 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (457) Mar 17 17:58:00.379580 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (447) Mar 17 17:58:00.233184 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:58:00.234005 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:58:00.234518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:58:00.234849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:58:00.235536 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:58:00.245038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:58:00.246455 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:58:00.406136 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Mar 17 17:58:00.411443 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Mar 17 17:58:00.411694 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Mar 17 17:58:00.411991 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Mar 17 17:58:00.412130 kernel: hub 1-0:1.0: USB hub found Mar 17 17:58:00.412314 kernel: hub 1-0:1.0: 2 ports detected Mar 17 17:58:00.373677 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 17 17:58:00.382186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:58:00.434387 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 17 17:58:00.464088 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 17 17:58:00.467863 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 17 17:58:00.483166 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:58:00.489208 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:58:00.492048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:58:00.502096 disk-uuid[540]: Primary Header is updated. Mar 17 17:58:00.502096 disk-uuid[540]: Secondary Entries is updated. Mar 17 17:58:00.502096 disk-uuid[540]: Secondary Header is updated. Mar 17 17:58:00.516882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:58:00.534426 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:58:01.535115 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:58:01.535252 disk-uuid[542]: The operation has completed successfully. Mar 17 17:58:01.608524 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:58:01.608665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:58:01.646051 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:58:01.650485 sh[561]: Success Mar 17 17:58:01.669825 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 17:58:01.762286 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Mar 17 17:58:01.763856 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:58:01.767884 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:58:01.810124 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc Mar 17 17:58:01.810240 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:58:01.812002 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:58:01.812155 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:58:01.813039 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:58:01.825689 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:58:01.827214 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:58:01.832178 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:58:01.836021 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:58:01.863372 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:01.863484 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:58:01.863506 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:58:01.869822 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:58:01.889833 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:01.889891 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:58:01.898650 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:58:01.907136 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:58:02.050639 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:58:02.059296 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:58:02.089866 ignition[656]: Ignition 2.20.0 Mar 17 17:58:02.089882 ignition[656]: Stage: fetch-offline Mar 17 17:58:02.089975 ignition[656]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:02.089991 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:58:02.090192 ignition[656]: parsed url from cmdline: "" Mar 17 17:58:02.090201 ignition[656]: no config URL provided Mar 17 17:58:02.090211 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:58:02.090226 ignition[656]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:58:02.090235 ignition[656]: failed to fetch config: resource requires networking Mar 17 17:58:02.090828 ignition[656]: Ignition finished successfully Mar 17 17:58:02.094030 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:58:02.123432 systemd-networkd[752]: lo: Link UP Mar 17 17:58:02.123454 systemd-networkd[752]: lo: Gained carrier Mar 17 17:58:02.127857 systemd-networkd[752]: Enumeration completed Mar 17 17:58:02.128504 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Mar 17 17:58:02.128513 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Mar 17 17:58:02.129942 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:58:02.130477 systemd[1]: Reached target network.target - Network. Mar 17 17:58:02.131988 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:58:02.131995 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:58:02.133372 systemd-networkd[752]: eth0: Link UP Mar 17 17:58:02.133379 systemd-networkd[752]: eth0: Gained carrier Mar 17 17:58:02.133399 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Mar 17 17:58:02.140185 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 17 17:58:02.140188 systemd-networkd[752]: eth1: Link UP Mar 17 17:58:02.140818 systemd-networkd[752]: eth1: Gained carrier Mar 17 17:58:02.140849 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:58:02.155938 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.23/20 acquired from 169.254.169.253 Mar 17 17:58:02.164903 systemd-networkd[752]: eth0: DHCPv4 address 134.199.208.120/20, gateway 134.199.208.1 acquired from 169.254.169.253 Mar 17 17:58:02.172289 ignition[757]: Ignition 2.20.0 Mar 17 17:58:02.173458 ignition[757]: Stage: fetch Mar 17 17:58:02.173847 ignition[757]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:02.173865 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:58:02.174027 ignition[757]: parsed url from cmdline: "" Mar 17 17:58:02.174034 ignition[757]: no config URL provided Mar 17 17:58:02.174042 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:58:02.174055 ignition[757]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:58:02.174103 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Mar 17 17:58:02.210258 ignition[757]: GET result: OK Mar 17 17:58:02.210479 ignition[757]: parsing config with SHA512: 2331b723c112a020a1cafcbc3d9c584f37b6c4a730358860702aac6c8f89ef6125b7d6374a84495fd4ab6c25da997342bd4ba7d7176dd693e9aa48a94e1c402e Mar 17 17:58:02.216531 unknown[757]: fetched base config from "system" Mar 17 17:58:02.216549 unknown[757]: fetched base config from "system" Mar 17 17:58:02.217022 ignition[757]: fetch: fetch complete Mar 17 17:58:02.216560 unknown[757]: fetched user config from "digitalocean" Mar 17 17:58:02.217032 ignition[757]: fetch: fetch passed Mar 17 17:58:02.217145 ignition[757]: Ignition finished successfully Mar 17 17:58:02.220922 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 17:58:02.229288 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:58:02.264272 ignition[764]: Ignition 2.20.0 Mar 17 17:58:02.264871 ignition[764]: Stage: kargs Mar 17 17:58:02.265277 ignition[764]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:02.265319 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:58:02.266462 ignition[764]: kargs: kargs passed Mar 17 17:58:02.266558 ignition[764]: Ignition finished successfully Mar 17 17:58:02.267902 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:58:02.275176 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 17 17:58:02.312817 ignition[770]: Ignition 2.20.0 Mar 17 17:58:02.312834 ignition[770]: Stage: disks Mar 17 17:58:02.314165 ignition[770]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:02.314194 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:58:02.317726 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:58:02.315671 ignition[770]: disks: disks passed Mar 17 17:58:02.319255 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:58:02.316126 ignition[770]: Ignition finished successfully Mar 17 17:58:02.320620 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:58:02.321243 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:58:02.322046 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:58:02.322707 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:58:02.329151 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:58:02.362897 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:58:02.365825 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:58:02.814099 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:58:02.947849 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none. Mar 17 17:58:02.950100 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:58:02.951293 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:58:02.962012 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:58:02.965737 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:58:02.968145 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Mar 17 17:58:02.976278 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 17 17:58:02.977375 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:58:02.977433 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:58:02.989886 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (786) Mar 17 17:58:02.994793 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:02.994908 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:58:02.994924 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:58:02.992161 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:58:03.007818 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:58:03.011120 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:58:03.019115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:58:03.092979 coreos-metadata[789]: Mar 17 17:58:03.092 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 17:58:03.103927 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:58:03.105043 coreos-metadata[788]: Mar 17 17:58:03.103 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 17:58:03.106701 coreos-metadata[789]: Mar 17 17:58:03.106 INFO Fetch successful Mar 17 17:58:03.111303 coreos-metadata[789]: Mar 17 17:58:03.111 INFO wrote hostname ci-4230.1.0-6-424e48892b to /sysroot/etc/hostname Mar 17 17:58:03.112640 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:58:03.116630 coreos-metadata[788]: Mar 17 17:58:03.116 INFO Fetch successful Mar 17 17:58:03.120161 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:58:03.124431 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Mar 17 17:58:03.124580 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Mar 17 17:58:03.127679 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:58:03.135255 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:58:03.274745 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:58:03.280975 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:58:03.293118 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:58:03.306856 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:03.331156 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:58:03.347822 ignition[908]: INFO : Ignition 2.20.0 Mar 17 17:58:03.347822 ignition[908]: INFO : Stage: mount Mar 17 17:58:03.350550 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:03.350550 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:58:03.350550 ignition[908]: INFO : mount: mount passed Mar 17 17:58:03.350550 ignition[908]: INFO : Ignition finished successfully Mar 17 17:58:03.351276 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:58:03.360082 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:58:03.809691 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:58:03.816171 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:58:03.829819 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (919) Mar 17 17:58:03.832897 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:03.833094 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:58:03.833114 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:58:03.838022 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:58:03.840093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
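As an illustrative aside (not part of the captured journal): flatcar-metadata-hostname above wrote the hostname fetched from the metadata service into the new root. After boot it can be verified with standard tools (the expected value is taken from the log):

    cat /etc/hostname      # expected: ci-4230.1.0-6-424e48892b
    hostnamectl status     # shows the same static hostname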
Mar 17 17:58:03.873585 ignition[936]: INFO : Ignition 2.20.0 Mar 17 17:58:03.874442 ignition[936]: INFO : Stage: files Mar 17 17:58:03.875137 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:03.876716 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:58:03.876716 ignition[936]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:58:03.877908 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:58:03.877908 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:58:03.880984 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:58:03.881842 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:58:03.882940 unknown[936]: wrote ssh authorized keys file for user: core Mar 17 17:58:03.883700 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:58:03.887809 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:58:03.887809 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:58:03.887809 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:58:03.887809 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:58:03.887809 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 17:58:03.887809 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 17:58:03.887809 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 17:58:03.887809 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Mar 17 17:58:04.006100 systemd-networkd[752]: eth1: Gained IPv6LL Mar 17 17:58:04.134524 systemd-networkd[752]: eth0: Gained IPv6LL Mar 17 17:58:04.260128 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 17 17:58:04.600047 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 17:58:04.602312 ignition[936]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:58:04.604695 ignition[936]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:58:04.604695 ignition[936]: INFO : files: files passed Mar 17 17:58:04.604695 ignition[936]: INFO : Ignition finished successfully Mar 17 17:58:04.606510 systemd[1]: Finished ignition-files.service - Ignition (files). 
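As an illustrative aside (not part of the captured journal): the "files" stage above writes install.sh and update.conf, downloads a Kubernetes sysext image, and links it into /etc/extensions. The following is a minimal sketch of an Ignition v3 config that would produce the link and the downloaded file; it is reconstructed from the paths and URL in the log (field layout per the Ignition v3 spec), is not the droplet's actual user-data, and omits the install.sh/update.conf entries:

    # Hypothetical reconstruction for illustration only.
    cat > example-ignition.json <<'EOF'
    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" }
        ],
        "files": [
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw" } }
        ]
      }
    }
    EOF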
Mar 17 17:58:04.617214 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:58:04.621107 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:58:04.645358 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:58:04.645530 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:58:04.659879 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:58:04.659879 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:58:04.661630 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:58:04.664698 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:58:04.666379 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:58:04.672108 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:58:04.712139 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:58:04.712296 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:58:04.714115 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:58:04.714974 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:58:04.715461 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:58:04.724206 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:58:04.745651 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:58:04.753226 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:58:04.783079 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:58:04.784034 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:58:04.786375 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:58:04.787575 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:58:04.788198 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:58:04.790572 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:58:04.791421 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:58:04.793503 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:58:04.794403 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:58:04.796230 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:58:04.797719 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:58:04.799012 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:58:04.799795 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:58:04.800869 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:58:04.801868 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:58:04.802651 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:58:04.802966 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Mar 17 17:58:04.804258 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:58:04.805525 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:58:04.806711 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:58:04.808154 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:58:04.809863 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:58:04.810103 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:58:04.811262 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:58:04.811502 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:58:04.813199 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:58:04.813395 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:58:04.814292 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 17:58:04.814521 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:58:04.827345 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:58:04.832257 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:58:04.833347 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:58:04.833680 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:58:04.835392 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:58:04.836263 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:58:04.848242 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:58:04.848924 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:58:04.860701 ignition[989]: INFO : Ignition 2.20.0 Mar 17 17:58:04.860701 ignition[989]: INFO : Stage: umount Mar 17 17:58:04.864802 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:04.864802 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:58:04.864802 ignition[989]: INFO : umount: umount passed Mar 17 17:58:04.864802 ignition[989]: INFO : Ignition finished successfully Mar 17 17:58:04.867519 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:58:04.867702 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:58:04.870618 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:58:04.872549 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:58:04.874120 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:58:04.874286 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:58:04.875654 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 17:58:04.876747 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 17:58:04.877480 systemd[1]: Stopped target network.target - Network. Mar 17 17:58:04.879944 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:58:04.880099 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:58:04.881083 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:58:04.881547 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Mar 17 17:58:04.881929 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:58:04.883064 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:58:04.901458 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:58:04.902352 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:58:04.902457 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:58:04.903051 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:58:04.903115 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:58:04.903629 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:58:04.903733 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:58:04.906377 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:58:04.906486 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:58:04.907618 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:58:04.908188 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:58:04.916292 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:58:04.917470 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:58:04.917608 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:58:04.922172 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:58:04.922669 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:58:04.922946 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:58:04.925531 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:58:04.925734 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:58:04.928913 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:58:04.931915 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:58:04.932015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:58:04.933894 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:58:04.934009 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:58:04.940083 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:58:04.941208 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:58:04.941384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:58:04.942208 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:58:04.942323 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:58:04.943382 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:58:04.943488 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:58:04.944555 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:58:04.944673 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:58:04.945572 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:58:04.949701 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Mar 17 17:58:04.953731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:58:04.969517 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:58:04.969823 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:58:04.973349 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:58:04.974422 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:58:04.976970 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:58:04.977092 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:58:04.978216 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:58:04.978292 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:58:04.979123 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:58:04.979236 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:58:04.980552 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:58:04.980660 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:58:04.981551 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:58:04.981653 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:58:04.989195 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:58:04.990082 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:58:04.990232 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:58:04.994409 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:58:04.994540 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:58:04.996609 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:58:04.996735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:58:04.998857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:58:04.998972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:58:05.003008 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 17:58:05.003155 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:58:05.004953 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:58:05.005145 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:58:05.009030 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:58:05.016535 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:58:05.034345 systemd[1]: Switching root. Mar 17 17:58:05.069522 systemd-journald[181]: Journal stopped Mar 17 17:58:06.497034 systemd-journald[181]: Received SIGTERM from PID 1 (systemd). 
Mar 17 17:58:06.497138 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:58:06.497173 kernel: SELinux: policy capability open_perms=1 Mar 17 17:58:06.497192 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:58:06.497205 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:58:06.497223 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:58:06.497236 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:58:06.497257 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:58:06.497270 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:58:06.497285 kernel: audit: type=1403 audit(1742234285.218:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:58:06.497299 systemd[1]: Successfully loaded SELinux policy in 45.969ms. Mar 17 17:58:06.497324 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 22.124ms. Mar 17 17:58:06.497339 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:58:06.497354 systemd[1]: Detected virtualization kvm. Mar 17 17:58:06.497373 systemd[1]: Detected architecture x86-64. Mar 17 17:58:06.497387 systemd[1]: Detected first boot. Mar 17 17:58:06.497400 systemd[1]: Hostname set to . Mar 17 17:58:06.497413 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:58:06.497426 zram_generator::config[1033]: No configuration found. Mar 17 17:58:06.497441 kernel: Guest personality initialized and is inactive Mar 17 17:58:06.497455 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 17 17:58:06.497473 kernel: Initialized host personality Mar 17 17:58:06.497496 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:58:06.497515 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:58:06.497546 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:58:06.497565 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:58:06.497583 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:58:06.497602 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:58:06.497621 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:58:06.497642 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:58:06.497675 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:58:06.497697 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:58:06.497723 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:58:06.497737 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:58:06.497752 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:58:06.498814 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:58:06.498882 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:58:06.498897 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 17 17:58:06.498911 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:58:06.498937 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:58:06.498951 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:58:06.498982 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:58:06.498995 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:58:06.499008 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:58:06.499021 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:58:06.499038 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:58:06.499051 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:58:06.499064 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:58:06.499077 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:58:06.499090 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:58:06.499104 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:58:06.499117 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:58:06.499129 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:58:06.499142 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:58:06.499159 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:58:06.499173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:58:06.499187 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:58:06.499200 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:58:06.499212 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:58:06.499225 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:58:06.499246 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:58:06.499260 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:58:06.499310 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:06.499328 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:58:06.499348 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:58:06.499365 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:58:06.499385 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:58:06.499398 systemd[1]: Reached target machines.target - Containers. Mar 17 17:58:06.499412 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:58:06.499425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:58:06.499438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 17 17:58:06.499451 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:58:06.499469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:58:06.499482 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:58:06.499494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:58:06.499506 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:58:06.499520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:58:06.499534 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:58:06.499550 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:58:06.499566 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:58:06.499582 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:58:06.499599 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:58:06.499613 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:58:06.499629 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:58:06.499652 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:58:06.499671 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:58:06.499691 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:58:06.499704 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:58:06.499721 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:58:06.499734 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:58:06.499747 systemd[1]: Stopped verity-setup.service. Mar 17 17:58:06.499785 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:06.499798 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:58:06.499811 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:58:06.499824 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:58:06.499838 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:58:06.499851 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:58:06.499864 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:58:06.499879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:58:06.499904 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:58:06.499923 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:58:06.499942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:58:06.499960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:58:06.499978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 17 17:58:06.500000 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:58:06.500019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:58:06.500036 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:58:06.500052 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:58:06.500091 kernel: loop: module loaded Mar 17 17:58:06.500113 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:58:06.500133 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:58:06.500156 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:58:06.500245 systemd-journald[1106]: Collecting audit messages is disabled. Mar 17 17:58:06.500611 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:58:06.502801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:58:06.502872 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:58:06.502886 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:58:06.502910 systemd-journald[1106]: Journal started Mar 17 17:58:06.502952 systemd-journald[1106]: Runtime Journal (/run/log/journal/ef7db9a8c6b94ba5bd115587d56dd2f7) is 4.9M, max 39.3M, 34.4M free. Mar 17 17:58:06.099016 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:58:06.112922 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:58:06.113626 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:58:06.506738 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:58:06.506886 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:58:06.507869 kernel: fuse: init (API version 7.39) Mar 17 17:58:06.513793 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 17:58:06.522799 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:58:06.533800 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:58:06.536888 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:58:06.555969 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:58:06.556268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:58:06.564811 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:58:06.566811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:58:06.582861 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:58:06.605039 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:58:06.611196 kernel: ACPI: bus type drm_connector registered Mar 17 17:58:06.590409 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:58:06.591224 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
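As an illustrative aside (not part of the captured journal): at this point journald is writing to the volatile runtime journal under /run/log/journal (sizes shown above); the flush to persistent storage happens in the lines that follow. The same state can be inspected by hand:

    journalctl --disk-usage   # size of the runtime (and later persistent) journal
    journalctl --flush        # ask journald to move /run/log/journal to /var/log/journal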
Mar 17 17:58:06.614916 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:58:06.615242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:58:06.633329 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:58:06.651451 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:58:06.653030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:58:06.682819 kernel: loop0: detected capacity change from 0 to 138176 Mar 17 17:58:06.681252 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:58:06.686452 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:58:06.687378 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:58:06.698093 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 17:58:06.700160 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:58:06.724637 systemd-tmpfiles[1125]: ACLs are not supported, ignoring. Mar 17 17:58:06.724676 systemd-tmpfiles[1125]: ACLs are not supported, ignoring. Mar 17 17:58:06.737898 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:58:06.742104 systemd-journald[1106]: Time spent on flushing to /var/log/journal/ef7db9a8c6b94ba5bd115587d56dd2f7 is 134.224ms for 992 entries. Mar 17 17:58:06.742104 systemd-journald[1106]: System Journal (/var/log/journal/ef7db9a8c6b94ba5bd115587d56dd2f7) is 8M, max 195.6M, 187.6M free. Mar 17 17:58:06.897622 systemd-journald[1106]: Received client request to flush runtime journal. Mar 17 17:58:06.898263 kernel: loop1: detected capacity change from 0 to 218376 Mar 17 17:58:06.898332 kernel: loop2: detected capacity change from 0 to 8 Mar 17 17:58:06.775723 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:58:06.787244 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:58:06.802971 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:58:06.880385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:58:06.893168 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:58:06.905412 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:58:06.950842 kernel: loop3: detected capacity change from 0 to 147912 Mar 17 17:58:06.985196 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 17:58:06.998714 kernel: loop4: detected capacity change from 0 to 138176 Mar 17 17:58:07.025795 kernel: loop5: detected capacity change from 0 to 218376 Mar 17 17:58:07.054556 kernel: loop6: detected capacity change from 0 to 8 Mar 17 17:58:07.051049 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:58:07.055973 kernel: loop7: detected capacity change from 0 to 147912 Mar 17 17:58:07.073222 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:58:07.091592 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. 
Mar 17 17:58:07.092785 (sd-merge)[1181]: Merged extensions into '/usr'. Mar 17 17:58:07.103167 systemd[1]: Reload requested from client PID 1138 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:58:07.103425 systemd[1]: Reloading... Mar 17 17:58:07.171471 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Mar 17 17:58:07.173603 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Mar 17 17:58:07.314830 zram_generator::config[1213]: No configuration found. Mar 17 17:58:07.523986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:58:07.600984 ldconfig[1130]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:58:07.658395 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:58:07.658914 systemd[1]: Reloading finished in 554 ms. Mar 17 17:58:07.675955 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:58:07.677400 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:58:07.678733 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:58:07.694106 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:58:07.706362 systemd[1]: Starting ensure-sysext.service... Mar 17 17:58:07.713178 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:58:07.737993 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:58:07.758187 systemd[1]: Reload requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:58:07.758219 systemd[1]: Reloading... Mar 17 17:58:07.802678 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:58:07.803018 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:58:07.804107 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:58:07.804444 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 17 17:58:07.804656 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 17 17:58:07.817381 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:58:07.817400 systemd-tmpfiles[1259]: Skipping /boot Mar 17 17:58:07.868613 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:58:07.868630 systemd-tmpfiles[1259]: Skipping /boot Mar 17 17:58:07.922792 zram_generator::config[1289]: No configuration found. Mar 17 17:58:08.112036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:58:08.201990 systemd[1]: Reloading finished in 443 ms. Mar 17 17:58:08.219451 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:58:08.234749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:58:08.254447 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
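As an illustrative aside (not part of the captured journal): the (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-digitalocean extension images onto /usr. After boot the merge can be inspected or redone:

    systemd-sysext status     # which hierarchies are overlaid and by which images
    systemd-sysext list       # extension images found (e.g. the kubernetes.raw link above)
    systemd-sysext refresh    # re-merge after adding or removing an image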
Mar 17 17:58:08.260447 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:58:08.266333 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:58:08.277480 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:58:08.284364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:58:08.288100 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:58:08.296302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:08.296677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:58:08.306494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:58:08.316324 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:58:08.327174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:58:08.328019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:58:08.328258 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:58:08.328495 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:08.339022 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:58:08.343369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:08.343686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:58:08.344065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:58:08.344212 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:58:08.344381 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:08.351698 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:08.352171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:58:08.358651 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:58:08.359675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:58:08.359975 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Mar 17 17:58:08.360200 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:08.366793 systemd[1]: Finished ensure-sysext.service. Mar 17 17:58:08.381253 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:58:08.398886 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:58:08.415924 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:58:08.434121 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:58:08.435375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:58:08.436962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:58:08.449942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:58:08.450293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:58:08.452508 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:58:08.473248 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:58:08.484882 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:58:08.486924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:58:08.487920 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:58:08.488623 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:58:08.490141 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Mar 17 17:58:08.495648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:58:08.495707 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:58:08.516646 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:58:08.541483 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:58:08.549037 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:58:08.556561 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:58:08.567273 augenrules[1385]: No rules Mar 17 17:58:08.569507 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:58:08.569816 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:58:08.702019 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:58:08.702643 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:58:08.768098 systemd-networkd[1374]: lo: Link UP Mar 17 17:58:08.768109 systemd-networkd[1374]: lo: Gained carrier Mar 17 17:58:08.771278 systemd-networkd[1374]: Enumeration completed Mar 17 17:58:08.771471 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:58:08.777913 systemd-resolved[1338]: Positive Trust Anchors: Mar 17 17:58:08.777933 systemd-resolved[1338]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:58:08.777972 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:58:08.781128 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:58:08.790078 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:58:08.791567 systemd-resolved[1338]: Using system hostname 'ci-4230.1.0-6-424e48892b'. Mar 17 17:58:08.795199 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:58:08.795875 systemd[1]: Reached target network.target - Network. Mar 17 17:58:08.796558 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:58:08.834916 systemd-networkd[1374]: eth1: Configuring with /run/systemd/network/10-66:be:ad:5a:55:a1.network. Mar 17 17:58:08.836638 systemd-networkd[1374]: eth1: Link UP Mar 17 17:58:08.836651 systemd-networkd[1374]: eth1: Gained carrier Mar 17 17:58:08.843126 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Mar 17 17:58:08.849564 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:58:08.870059 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 17:58:08.883968 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Mar 17 17:58:08.894019 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Mar 17 17:58:08.894476 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:08.894714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:58:08.905073 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:58:08.911113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:58:08.914798 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1393) Mar 17 17:58:08.917194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:58:08.917732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:58:08.917829 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:58:08.917862 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
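As an illustrative aside (not part of the captured journal): the log above shows networkd configuring eth1 from a generated per-MAC unit at /run/systemd/network/10-66:be:ad:5a:55:a1.network. Its actual contents are not captured in the journal; a minimal DHCP unit of that shape, written by hand, might look like this:

    # Hypothetical contents; only the path and MAC address come from the log.
    cat <<'EOF' > /run/systemd/network/10-66:be:ad:5a:55:a1.network
    [Match]
    MACAddress=66:be:ad:5a:55:a1

    [Network]
    DHCP=ipv4
    EOF
    networkctl reload    # have networkd pick up the new unit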
Mar 17 17:58:08.917881 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:08.918563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:58:08.919308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:58:08.947778 kernel: ISO 9660 Extensions: RRIP_1991A Mar 17 17:58:08.950575 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Mar 17 17:58:08.957249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:58:08.957565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:58:08.958482 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:58:08.958708 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:58:08.962640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:58:08.962710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:58:08.997803 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Mar 17 17:58:09.011718 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 17:58:09.011767 kernel: ACPI: button: Power Button [PWRF] Mar 17 17:58:09.053437 systemd-networkd[1374]: eth0: Configuring with /run/systemd/network/10-26:86:90:17:eb:b6.network. Mar 17 17:58:09.055108 systemd-networkd[1374]: eth0: Link UP Mar 17 17:58:09.055119 systemd-networkd[1374]: eth0: Gained carrier Mar 17 17:58:09.055888 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Mar 17 17:58:09.060981 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Mar 17 17:58:09.061785 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 17:58:09.063086 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Mar 17 17:58:09.090933 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:58:09.101108 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:58:09.139828 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:58:09.151801 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Mar 17 17:58:09.151917 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Mar 17 17:58:09.154821 kernel: Console: switching to colour dummy device 80x25 Mar 17 17:58:09.154915 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 17 17:58:09.154932 kernel: [drm] features: -context_init Mar 17 17:58:09.178963 kernel: [drm] number of scanouts: 1 Mar 17 17:58:09.179073 kernel: [drm] number of cap sets: 0 Mar 17 17:58:09.179391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:58:09.181789 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:58:09.197797 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Mar 17 17:58:09.198204 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:58:09.198589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
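As an illustrative aside (not part of the captured journal): the configdrive mounted above is an ISO 9660 image (with Rock Ridge extensions) labelled config-2. Mounting it manually uses the same label and mount point shown in the log:

    mount -o ro /dev/disk/by-label/config-2 /media/configdrive
    ls /media/configdrive    # platform-provided metadata for this droplet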
Mar 17 17:58:09.213233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:58:09.219705 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Mar 17 17:58:09.219829 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 17:58:09.238045 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 17 17:58:09.242295 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:58:09.242659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:58:09.268414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:58:09.382796 kernel: EDAC MC: Ver: 3.0.0 Mar 17 17:58:09.405483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:58:09.411578 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:58:09.418368 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:58:09.440791 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:58:09.477683 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:58:09.479447 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:58:09.479641 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:58:09.479932 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:58:09.480055 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:58:09.480484 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:58:09.481225 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:58:09.481387 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:58:09.481576 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:58:09.481718 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:58:09.482032 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:58:09.484135 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:58:09.486487 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:58:09.492167 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:58:09.494161 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:58:09.494445 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:58:09.499246 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:58:09.502985 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:58:09.511106 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:58:09.514449 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:58:09.516594 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:58:09.517563 systemd[1]: Reached target basic.target - Basic System. 
Mar 17 17:58:09.518525 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:58:09.518582 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:58:09.529817 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:58:09.528847 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:58:09.539022 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:58:09.546065 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:58:09.551018 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:58:09.563142 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:58:09.563723 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:58:09.570131 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:58:09.575939 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:58:09.585153 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:58:09.589557 jq[1456]: false Mar 17 17:58:09.599220 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:58:09.603637 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:58:09.611045 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:58:09.613297 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:58:09.620304 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:58:09.624307 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:58:09.637327 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:58:09.637585 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:58:09.664739 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:58:09.665694 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:58:09.680140 coreos-metadata[1454]: Mar 17 17:58:09.678 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 17:58:09.701951 coreos-metadata[1454]: Mar 17 17:58:09.692 INFO Fetch successful Mar 17 17:58:09.691171 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:58:09.722418 jq[1465]: true Mar 17 17:58:09.740019 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:58:09.746538 dbus-daemon[1455]: [system] SELinux support is enabled Mar 17 17:58:09.746914 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:58:09.756214 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 17 17:58:09.759577 update_engine[1464]: I20250317 17:58:09.758942 1464 main.cc:92] Flatcar Update Engine starting Mar 17 17:58:09.756265 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:58:09.760132 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:58:09.760278 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Mar 17 17:58:09.760321 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:58:09.764914 extend-filesystems[1458]: Found loop4 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found loop5 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found loop6 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found loop7 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found vda Mar 17 17:58:09.767700 extend-filesystems[1458]: Found vda1 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found vda2 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found vda3 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found usr Mar 17 17:58:09.767700 extend-filesystems[1458]: Found vda4 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found vda6 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found vda7 Mar 17 17:58:09.767700 extend-filesystems[1458]: Found vda9 Mar 17 17:58:09.767700 extend-filesystems[1458]: Checking size of /dev/vda9 Mar 17 17:58:09.773701 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:58:09.847054 update_engine[1464]: I20250317 17:58:09.779979 1464 update_check_scheduler.cc:74] Next update check in 7m1s Mar 17 17:58:09.790051 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:58:09.847244 jq[1483]: true Mar 17 17:58:09.842287 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:58:09.842565 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:58:09.877877 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:58:09.879173 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:58:09.893254 systemd-logind[1463]: New seat seat0. Mar 17 17:58:09.895003 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 17:58:09.899333 extend-filesystems[1458]: Resized partition /dev/vda9 Mar 17 17:58:09.895033 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:58:09.895390 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:58:09.912189 extend-filesystems[1500]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:58:09.924107 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Mar 17 17:58:10.052852 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1395) Mar 17 17:58:10.078391 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:58:10.086129 systemd-networkd[1374]: eth1: Gained IPv6LL Mar 17 17:58:10.093235 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Mar 17 17:58:10.119392 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Mar 17 17:58:10.121062 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:58:10.129190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:58:10.132565 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:58:10.141236 bash[1514]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:58:10.140518 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:58:10.153363 systemd[1]: Starting sshkeys.service... Mar 17 17:58:10.186534 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 17 17:58:10.188332 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:58:10.202041 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 17:58:10.214481 extend-filesystems[1500]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:58:10.214481 extend-filesystems[1500]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 17 17:58:10.214481 extend-filesystems[1500]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 17 17:58:10.234447 extend-filesystems[1458]: Resized filesystem in /dev/vda9 Mar 17 17:58:10.234447 extend-filesystems[1458]: Found vdb Mar 17 17:58:10.216233 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:58:10.219786 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:58:10.284980 coreos-metadata[1531]: Mar 17 17:58:10.283 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 17:58:10.305135 coreos-metadata[1531]: Mar 17 17:58:10.304 INFO Fetch successful Mar 17 17:58:10.300213 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:58:10.328091 unknown[1531]: wrote ssh authorized keys file for user: core Mar 17 17:58:10.386098 update-ssh-keys[1543]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:58:10.389943 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 17:58:10.392102 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:58:10.399899 systemd[1]: Finished sshkeys.service. Mar 17 17:58:10.447899 containerd[1478]: time="2025-03-17T17:58:10.447013131Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:58:10.464528 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:58:10.480357 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:58:10.494580 systemd[1]: Started sshd@0-134.199.208.120:22-139.178.68.195:57364.service - OpenSSH per-connection server daemon (139.178.68.195:57364). Mar 17 17:58:10.533677 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:58:10.533964 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:58:10.552439 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:58:10.555278 containerd[1478]: time="2025-03-17T17:58:10.554823834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.560542659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561159318Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561188718Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561366027Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561386976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561452823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561465400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561736798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561809057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561826360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:58:10.562109 containerd[1478]: time="2025-03-17T17:58:10.561836293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:58:10.562555 containerd[1478]: time="2025-03-17T17:58:10.561954586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:58:10.564525 containerd[1478]: time="2025-03-17T17:58:10.563260921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:58:10.564525 containerd[1478]: time="2025-03-17T17:58:10.563559390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:58:10.564525 containerd[1478]: time="2025-03-17T17:58:10.563584754Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:58:10.564525 containerd[1478]: time="2025-03-17T17:58:10.563744739Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 17 17:58:10.564525 containerd[1478]: time="2025-03-17T17:58:10.564199097Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.571239828Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.571356447Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.571375390Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.571427639Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.571446484Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.574118807Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.574569075Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.574771172Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.574799944Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.574824710Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.574845934Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.574864393Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.574889626Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:58:10.575946 containerd[1478]: time="2025-03-17T17:58:10.574911851Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.574934860Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.574952441Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.574965867Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.574980166Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575008525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575024674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575047404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575095855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575115667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575129531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575143177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575156464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575170458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.579670 containerd[1478]: time="2025-03-17T17:58:10.575185831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575197842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575210790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575225174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575239608Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575264161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575279557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575291396Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575340375Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575360504Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575372222Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575386170Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575395612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575413743Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:58:10.580134 containerd[1478]: time="2025-03-17T17:58:10.575430119Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:58:10.580574 containerd[1478]: time="2025-03-17T17:58:10.575447840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:58:10.581088 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:58:10.587210 containerd[1478]: time="2025-03-17T17:58:10.586225255Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] 
ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:58:10.587210 containerd[1478]: time="2025-03-17T17:58:10.586349485Z" level=info msg="Connect containerd service" Mar 17 17:58:10.587210 containerd[1478]: time="2025-03-17T17:58:10.586440010Z" level=info msg="using legacy CRI server" Mar 17 17:58:10.587210 containerd[1478]: time="2025-03-17T17:58:10.586455871Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:58:10.587210 containerd[1478]: time="2025-03-17T17:58:10.586678382Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:58:10.593351 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:58:10.594425 containerd[1478]: time="2025-03-17T17:58:10.593335476Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:58:10.594425 containerd[1478]: time="2025-03-17T17:58:10.593543219Z" level=info msg="Start subscribing containerd event" Mar 17 17:58:10.594425 containerd[1478]: time="2025-03-17T17:58:10.593602358Z" level=info msg="Start recovering state" Mar 17 17:58:10.594425 containerd[1478]: time="2025-03-17T17:58:10.593707848Z" level=info msg="Start event monitor" Mar 17 17:58:10.594425 containerd[1478]: time="2025-03-17T17:58:10.593725606Z" level=info msg="Start snapshots syncer" Mar 17 17:58:10.594425 containerd[1478]: time="2025-03-17T17:58:10.593741333Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:58:10.594425 containerd[1478]: time="2025-03-17T17:58:10.593778997Z" level=info msg="Start streaming server" Mar 17 17:58:10.599991 containerd[1478]: time="2025-03-17T17:58:10.598091709Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:58:10.599991 containerd[1478]: time="2025-03-17T17:58:10.598206301Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:58:10.599991 containerd[1478]: time="2025-03-17T17:58:10.598308385Z" level=info msg="containerd successfully booted in 0.154725s" Mar 17 17:58:10.606673 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:58:10.607585 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:58:10.609539 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:58:10.671487 sshd[1558]: Accepted publickey for core from 139.178.68.195 port 57364 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:10.673848 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:10.685542 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:58:10.693214 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:58:10.716819 systemd-logind[1463]: New session 1 of user core. Mar 17 17:58:10.726089 systemd-networkd[1374]: eth0: Gained IPv6LL Mar 17 17:58:10.726831 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. 
Mar 17 17:58:10.739365 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:58:10.755515 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:58:10.777634 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:58:10.785948 systemd-logind[1463]: New session c1 of user core. Mar 17 17:58:10.952415 systemd[1571]: Queued start job for default target default.target. Mar 17 17:58:10.965658 systemd[1571]: Created slice app.slice - User Application Slice. Mar 17 17:58:10.965716 systemd[1571]: Reached target paths.target - Paths. Mar 17 17:58:10.965966 systemd[1571]: Reached target timers.target - Timers. Mar 17 17:58:10.970128 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:58:11.007343 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:58:11.007628 systemd[1571]: Reached target sockets.target - Sockets. Mar 17 17:58:11.007715 systemd[1571]: Reached target basic.target - Basic System. Mar 17 17:58:11.007802 systemd[1571]: Reached target default.target - Main User Target. Mar 17 17:58:11.007846 systemd[1571]: Startup finished in 210ms. Mar 17 17:58:11.008539 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:58:11.021124 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:58:11.110868 systemd[1]: Started sshd@1-134.199.208.120:22-139.178.68.195:57370.service - OpenSSH per-connection server daemon (139.178.68.195:57370). Mar 17 17:58:11.189522 sshd[1582]: Accepted publickey for core from 139.178.68.195 port 57370 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:11.192821 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:11.201101 systemd-logind[1463]: New session 2 of user core. Mar 17 17:58:11.216196 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:58:11.287792 sshd[1584]: Connection closed by 139.178.68.195 port 57370 Mar 17 17:58:11.289066 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:11.302572 systemd[1]: sshd@1-134.199.208.120:22-139.178.68.195:57370.service: Deactivated successfully. Mar 17 17:58:11.306062 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:58:11.308722 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:58:11.317019 systemd[1]: Started sshd@2-134.199.208.120:22-139.178.68.195:57384.service - OpenSSH per-connection server daemon (139.178.68.195:57384). Mar 17 17:58:11.322860 systemd-logind[1463]: Removed session 2. Mar 17 17:58:11.382799 sshd[1589]: Accepted publickey for core from 139.178.68.195 port 57384 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:11.385978 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:11.397887 systemd-logind[1463]: New session 3 of user core. Mar 17 17:58:11.400087 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:58:11.475946 sshd[1592]: Connection closed by 139.178.68.195 port 57384 Mar 17 17:58:11.476914 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:11.484792 systemd[1]: sshd@2-134.199.208.120:22-139.178.68.195:57384.service: Deactivated successfully. Mar 17 17:58:11.489395 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 17 17:58:11.490803 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:58:11.493080 systemd-logind[1463]: Removed session 3. Mar 17 17:58:11.610182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:58:11.611583 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:58:11.616710 systemd[1]: Startup finished in 1.140s (kernel) + 6.491s (initrd) + 6.442s (userspace) = 14.073s. Mar 17 17:58:11.625244 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:58:12.357656 kubelet[1602]: E0317 17:58:12.357593 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:58:12.361473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:58:12.361703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:58:12.362243 systemd[1]: kubelet.service: Consumed 1.277s CPU time, 253.8M memory peak. Mar 17 17:58:21.504207 systemd[1]: Started sshd@3-134.199.208.120:22-139.178.68.195:55676.service - OpenSSH per-connection server daemon (139.178.68.195:55676). Mar 17 17:58:21.558026 sshd[1614]: Accepted publickey for core from 139.178.68.195 port 55676 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:21.560632 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:21.567378 systemd-logind[1463]: New session 4 of user core. Mar 17 17:58:21.575128 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:58:21.641967 sshd[1616]: Connection closed by 139.178.68.195 port 55676 Mar 17 17:58:21.642791 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:21.654609 systemd[1]: sshd@3-134.199.208.120:22-139.178.68.195:55676.service: Deactivated successfully. Mar 17 17:58:21.657736 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:58:21.660335 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:58:21.669340 systemd[1]: Started sshd@4-134.199.208.120:22-139.178.68.195:55682.service - OpenSSH per-connection server daemon (139.178.68.195:55682). Mar 17 17:58:21.671989 systemd-logind[1463]: Removed session 4. Mar 17 17:58:21.722261 sshd[1621]: Accepted publickey for core from 139.178.68.195 port 55682 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:21.725225 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:21.733461 systemd-logind[1463]: New session 5 of user core. Mar 17 17:58:21.744145 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:58:21.806906 sshd[1624]: Connection closed by 139.178.68.195 port 55682 Mar 17 17:58:21.805599 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:21.822170 systemd[1]: sshd@4-134.199.208.120:22-139.178.68.195:55682.service: Deactivated successfully. Mar 17 17:58:21.826298 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:58:21.830991 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. 
Mar 17 17:58:21.835803 systemd[1]: Started sshd@5-134.199.208.120:22-139.178.68.195:55686.service - OpenSSH per-connection server daemon (139.178.68.195:55686). Mar 17 17:58:21.838070 systemd-logind[1463]: Removed session 5. Mar 17 17:58:21.909331 sshd[1629]: Accepted publickey for core from 139.178.68.195 port 55686 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:21.911938 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:21.922134 systemd-logind[1463]: New session 6 of user core. Mar 17 17:58:21.932080 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:58:22.000029 sshd[1632]: Connection closed by 139.178.68.195 port 55686 Mar 17 17:58:22.001189 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:22.014441 systemd[1]: sshd@5-134.199.208.120:22-139.178.68.195:55686.service: Deactivated successfully. Mar 17 17:58:22.017558 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:58:22.022946 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:58:22.028609 systemd[1]: Started sshd@6-134.199.208.120:22-139.178.68.195:55690.service - OpenSSH per-connection server daemon (139.178.68.195:55690). Mar 17 17:58:22.031684 systemd-logind[1463]: Removed session 6. Mar 17 17:58:22.089300 sshd[1637]: Accepted publickey for core from 139.178.68.195 port 55690 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:22.091421 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:22.099000 systemd-logind[1463]: New session 7 of user core. Mar 17 17:58:22.110430 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:58:22.186345 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:58:22.186928 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:58:22.208679 sudo[1641]: pam_unix(sudo:session): session closed for user root Mar 17 17:58:22.212838 sshd[1640]: Connection closed by 139.178.68.195 port 55690 Mar 17 17:58:22.213986 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:22.231256 systemd[1]: sshd@6-134.199.208.120:22-139.178.68.195:55690.service: Deactivated successfully. Mar 17 17:58:22.233828 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:58:22.235145 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:58:22.242446 systemd[1]: Started sshd@7-134.199.208.120:22-139.178.68.195:55700.service - OpenSSH per-connection server daemon (139.178.68.195:55700). Mar 17 17:58:22.244594 systemd-logind[1463]: Removed session 7. Mar 17 17:58:22.310210 sshd[1646]: Accepted publickey for core from 139.178.68.195 port 55700 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:22.312715 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:22.320206 systemd-logind[1463]: New session 8 of user core. Mar 17 17:58:22.328276 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 17 17:58:22.394576 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:58:22.395230 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:58:22.397032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:58:22.409940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:58:22.414516 sudo[1651]: pam_unix(sudo:session): session closed for user root Mar 17 17:58:22.427069 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:58:22.427616 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:58:22.457286 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:58:22.512252 augenrules[1676]: No rules Mar 17 17:58:22.515423 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:58:22.516452 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:58:22.519147 sudo[1650]: pam_unix(sudo:session): session closed for user root Mar 17 17:58:22.523390 sshd[1649]: Connection closed by 139.178.68.195 port 55700 Mar 17 17:58:22.524540 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:22.556230 systemd[1]: Started sshd@8-134.199.208.120:22-139.178.68.195:55708.service - OpenSSH per-connection server daemon (139.178.68.195:55708). Mar 17 17:58:22.557651 systemd[1]: sshd@7-134.199.208.120:22-139.178.68.195:55700.service: Deactivated successfully. Mar 17 17:58:22.564566 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:58:22.580511 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:58:22.586571 systemd-logind[1463]: Removed session 8. Mar 17 17:58:22.634307 sshd[1682]: Accepted publickey for core from 139.178.68.195 port 55708 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:22.636152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:58:22.637494 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:22.650407 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:58:22.655730 systemd-logind[1463]: New session 9 of user core. Mar 17 17:58:22.665215 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:58:22.729429 kubelet[1692]: E0317 17:58:22.729350 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:58:22.734333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:58:22.734027 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:58:22.734575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:58:22.734492 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:58:22.736301 systemd[1]: kubelet.service: Consumed 222ms CPU time, 101.4M memory peak. 
Mar 17 17:58:23.557446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:58:23.558365 systemd[1]: kubelet.service: Consumed 222ms CPU time, 101.4M memory peak. Mar 17 17:58:23.570242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:58:23.619387 systemd[1]: Reload requested from client PID 1733 ('systemctl') (unit session-9.scope)... Mar 17 17:58:23.619420 systemd[1]: Reloading... Mar 17 17:58:23.832919 zram_generator::config[1785]: No configuration found. Mar 17 17:58:23.996687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:58:24.129829 systemd[1]: Reloading finished in 509 ms. Mar 17 17:58:24.211246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:58:24.211943 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:58:24.219456 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:58:24.220178 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:58:24.221155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:58:24.221545 systemd[1]: kubelet.service: Consumed 150ms CPU time, 92.9M memory peak. Mar 17 17:58:24.241666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:58:24.433238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:58:24.435427 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:58:24.506607 kubelet[1833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:58:24.507807 kubelet[1833]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:58:24.507807 kubelet[1833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:58:24.507807 kubelet[1833]: I0317 17:58:24.507392 1833 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:58:24.960647 kubelet[1833]: I0317 17:58:24.960554 1833 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:58:24.960647 kubelet[1833]: I0317 17:58:24.960615 1833 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:58:24.961179 kubelet[1833]: I0317 17:58:24.961124 1833 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:58:24.985930 kubelet[1833]: I0317 17:58:24.984991 1833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:58:25.001380 kubelet[1833]: E0317 17:58:25.001311 1833 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:58:25.001380 kubelet[1833]: I0317 17:58:25.001370 1833 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:58:25.006098 kubelet[1833]: I0317 17:58:25.006046 1833 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:58:25.007684 kubelet[1833]: I0317 17:58:25.007566 1833 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:58:25.007872 kubelet[1833]: I0317 17:58:25.007655 1833 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"134.199.208.120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:58:25.008033 kubelet[1833]: I0317 17:58:25.007892 1833 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:58:25.008033 kubelet[1833]: I0317 17:58:25.007909 1833 container_manager_linux.go:304] "Creating device plugin manager" 
Mar 17 17:58:25.008704 kubelet[1833]: I0317 17:58:25.008170 1833 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:58:25.013798 kubelet[1833]: I0317 17:58:25.013697 1833 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:58:25.013981 kubelet[1833]: I0317 17:58:25.013833 1833 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:58:25.013981 kubelet[1833]: I0317 17:58:25.013882 1833 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:58:25.013981 kubelet[1833]: I0317 17:58:25.013904 1833 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:58:25.017778 kubelet[1833]: E0317 17:58:25.016768 1833 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:25.017778 kubelet[1833]: E0317 17:58:25.017705 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:25.018541 kubelet[1833]: I0317 17:58:25.018502 1833 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:58:25.019123 kubelet[1833]: I0317 17:58:25.019082 1833 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:58:25.019849 kubelet[1833]: W0317 17:58:25.019820 1833 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:58:25.022551 kubelet[1833]: I0317 17:58:25.022479 1833 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:58:25.022551 kubelet[1833]: I0317 17:58:25.022539 1833 server.go:1287] "Started kubelet" Mar 17 17:58:25.022905 kubelet[1833]: I0317 17:58:25.022803 1833 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:58:25.024253 kubelet[1833]: I0317 17:58:25.024188 1833 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:58:25.028783 kubelet[1833]: I0317 17:58:25.028089 1833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:58:25.028783 kubelet[1833]: I0317 17:58:25.028631 1833 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:58:25.030806 kubelet[1833]: I0317 17:58:25.030744 1833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:58:25.033843 kubelet[1833]: I0317 17:58:25.033785 1833 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:58:25.039587 kubelet[1833]: E0317 17:58:25.038073 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{134.199.208.120.182da8dea25dba2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:134.199.208.120,UID:134.199.208.120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:134.199.208.120,},FirstTimestamp:2025-03-17 17:58:25.022507567 +0000 UTC m=+0.575499196,LastTimestamp:2025-03-17 17:58:25.022507567 +0000 UTC m=+0.575499196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:134.199.208.120,}" Mar 17 17:58:25.039859 kubelet[1833]: W0317 17:58:25.039731 1833 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:58:25.039859 kubelet[1833]: E0317 17:58:25.039815 1833 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 17:58:25.040012 kubelet[1833]: W0317 17:58:25.039989 1833 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "134.199.208.120" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:58:25.040069 kubelet[1833]: E0317 17:58:25.040020 1833 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"134.199.208.120\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 17:58:25.041166 kubelet[1833]: E0317 17:58:25.041091 1833 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:58:25.043078 kubelet[1833]: I0317 17:58:25.041917 1833 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:58:25.043078 kubelet[1833]: E0317 17:58:25.042097 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:25.043078 kubelet[1833]: I0317 17:58:25.042211 1833 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:58:25.043078 kubelet[1833]: I0317 17:58:25.042302 1833 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:58:25.044555 kubelet[1833]: I0317 17:58:25.044331 1833 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:58:25.048163 kubelet[1833]: I0317 17:58:25.048127 1833 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:58:25.048446 kubelet[1833]: I0317 17:58:25.048430 1833 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:58:25.057861 kubelet[1833]: W0317 17:58:25.056074 1833 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 17:58:25.058499 kubelet[1833]: E0317 17:58:25.058137 1833 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 17 17:58:25.058499 kubelet[1833]: E0317 17:58:25.058245 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{134.199.208.120.182da8dea378ed03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:134.199.208.120,UID:134.199.208.120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:134.199.208.120,},FirstTimestamp:2025-03-17 17:58:25.041067267 +0000 UTC m=+0.594058904,LastTimestamp:2025-03-17 17:58:25.041067267 +0000 UTC m=+0.594058904,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:134.199.208.120,}" Mar 17 17:58:25.059797 kubelet[1833]: E0317 17:58:25.058911 1833 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"134.199.208.120\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Mar 17 17:58:25.087572 kubelet[1833]: I0317 17:58:25.087533 1833 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:58:25.087771 kubelet[1833]: I0317 17:58:25.087740 1833 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:58:25.087962 kubelet[1833]: I0317 17:58:25.087952 1833 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:58:25.090042 kubelet[1833]: I0317 17:58:25.090011 1833 policy_none.go:49] "None policy: Start" Mar 17 17:58:25.090400 kubelet[1833]: I0317 17:58:25.090384 1833 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:58:25.090533 kubelet[1833]: I0317 17:58:25.090520 1833 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:58:25.108331 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:58:25.128475 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:58:25.136018 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:58:25.142840 kubelet[1833]: E0317 17:58:25.142713 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:25.148222 kubelet[1833]: I0317 17:58:25.147965 1833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:58:25.148222 kubelet[1833]: I0317 17:58:25.148217 1833 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:58:25.148731 kubelet[1833]: I0317 17:58:25.148578 1833 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:58:25.148731 kubelet[1833]: I0317 17:58:25.148604 1833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:58:25.152801 kubelet[1833]: I0317 17:58:25.152216 1833 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:58:25.154467 kubelet[1833]: I0317 17:58:25.154392 1833 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:58:25.154972 kubelet[1833]: I0317 17:58:25.154735 1833 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:58:25.154972 kubelet[1833]: I0317 17:58:25.154785 1833 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 17 17:58:25.154972 kubelet[1833]: I0317 17:58:25.154795 1833 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:58:25.159048 kubelet[1833]: E0317 17:58:25.158717 1833 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 17:58:25.160926 kubelet[1833]: E0317 17:58:25.160084 1833 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:58:25.160926 kubelet[1833]: E0317 17:58:25.160133 1833 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"134.199.208.120\" not found" Mar 17 17:58:25.251268 kubelet[1833]: I0317 17:58:25.250746 1833 kubelet_node_status.go:76] "Attempting to register node" node="134.199.208.120" Mar 17 17:58:25.269336 kubelet[1833]: I0317 17:58:25.269080 1833 kubelet_node_status.go:79] "Successfully registered node" node="134.199.208.120" Mar 17 17:58:25.269336 kubelet[1833]: E0317 17:58:25.269128 1833 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"134.199.208.120\": node \"134.199.208.120\" not found" Mar 17 17:58:25.280252 kubelet[1833]: E0317 17:58:25.280146 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:25.381222 kubelet[1833]: E0317 17:58:25.381146 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:25.481403 kubelet[1833]: E0317 17:58:25.481337 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:25.486959 sudo[1699]: pam_unix(sudo:session): session closed for user root Mar 17 17:58:25.490104 sshd[1697]: Connection closed by 139.178.68.195 port 55708 Mar 17 17:58:25.491169 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:25.498267 systemd[1]: sshd@8-134.199.208.120:22-139.178.68.195:55708.service: Deactivated successfully. Mar 17 17:58:25.501613 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:58:25.502590 systemd[1]: session-9.scope: Consumed 695ms CPU time, 70.8M memory peak. Mar 17 17:58:25.504763 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:58:25.506228 systemd-logind[1463]: Removed session 9. 
Mar 17 17:58:25.582077 kubelet[1833]: E0317 17:58:25.581995 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:25.683179 kubelet[1833]: E0317 17:58:25.683093 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:25.784733 kubelet[1833]: E0317 17:58:25.784101 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:25.886082 kubelet[1833]: E0317 17:58:25.885730 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:25.964096 kubelet[1833]: I0317 17:58:25.963012 1833 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 17:58:25.965268 kubelet[1833]: W0317 17:58:25.965222 1833 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:58:25.991897 kubelet[1833]: E0317 17:58:25.991373 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"134.199.208.120\" not found" Mar 17 17:58:26.018568 kubelet[1833]: E0317 17:58:26.018481 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:26.094250 kubelet[1833]: I0317 17:58:26.094198 1833 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 17:58:26.094949 containerd[1478]: time="2025-03-17T17:58:26.094850400Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:58:26.096238 kubelet[1833]: I0317 17:58:26.095517 1833 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 17:58:27.019080 kubelet[1833]: I0317 17:58:27.018823 1833 apiserver.go:52] "Watching apiserver" Mar 17 17:58:27.019080 kubelet[1833]: E0317 17:58:27.018999 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:27.023640 kubelet[1833]: E0317 17:58:27.023202 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:27.039882 systemd[1]: Created slice kubepods-besteffort-pod598bdf7f_c223_4253_a408_909f035a820f.slice - libcontainer container kubepods-besteffort-pod598bdf7f_c223_4253_a408_909f035a820f.slice. 
Mar 17 17:58:27.043593 kubelet[1833]: I0317 17:58:27.043508 1833 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:58:27.055775 kubelet[1833]: I0317 17:58:27.055083 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bsfn\" (UniqueName: \"kubernetes.io/projected/598bdf7f-c223-4253-a408-909f035a820f-kube-api-access-2bsfn\") pod \"kube-proxy-sxgg7\" (UID: \"598bdf7f-c223-4253-a408-909f035a820f\") " pod="kube-system/kube-proxy-sxgg7" Mar 17 17:58:27.055775 kubelet[1833]: I0317 17:58:27.055135 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38969da3-b697-4c4b-a584-ed5113bb0f82-tigera-ca-bundle\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.055775 kubelet[1833]: I0317 17:58:27.055161 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/38969da3-b697-4c4b-a584-ed5113bb0f82-var-run-calico\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.055775 kubelet[1833]: I0317 17:58:27.055178 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/38969da3-b697-4c4b-a584-ed5113bb0f82-cni-log-dir\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.055775 kubelet[1833]: I0317 17:58:27.055195 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/38969da3-b697-4c4b-a584-ed5113bb0f82-policysync\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.056098 kubelet[1833]: I0317 17:58:27.055214 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/38969da3-b697-4c4b-a584-ed5113bb0f82-var-lib-calico\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.056098 kubelet[1833]: I0317 17:58:27.055273 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e76c5731-9144-4644-b2d4-50c1e2e23da7-varrun\") pod \"csi-node-driver-b69cd\" (UID: \"e76c5731-9144-4644-b2d4-50c1e2e23da7\") " pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:27.056098 kubelet[1833]: I0317 17:58:27.055293 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e76c5731-9144-4644-b2d4-50c1e2e23da7-kubelet-dir\") pod \"csi-node-driver-b69cd\" (UID: \"e76c5731-9144-4644-b2d4-50c1e2e23da7\") " pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:27.056098 kubelet[1833]: I0317 17:58:27.055310 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e76c5731-9144-4644-b2d4-50c1e2e23da7-socket-dir\") pod \"csi-node-driver-b69cd\" (UID: 
\"e76c5731-9144-4644-b2d4-50c1e2e23da7\") " pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:27.056098 kubelet[1833]: I0317 17:58:27.055325 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-568tl\" (UniqueName: \"kubernetes.io/projected/e76c5731-9144-4644-b2d4-50c1e2e23da7-kube-api-access-568tl\") pod \"csi-node-driver-b69cd\" (UID: \"e76c5731-9144-4644-b2d4-50c1e2e23da7\") " pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:27.056227 kubelet[1833]: I0317 17:58:27.055349 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/598bdf7f-c223-4253-a408-909f035a820f-xtables-lock\") pod \"kube-proxy-sxgg7\" (UID: \"598bdf7f-c223-4253-a408-909f035a820f\") " pod="kube-system/kube-proxy-sxgg7" Mar 17 17:58:27.056227 kubelet[1833]: I0317 17:58:27.055373 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38969da3-b697-4c4b-a584-ed5113bb0f82-lib-modules\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.056227 kubelet[1833]: I0317 17:58:27.055398 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/38969da3-b697-4c4b-a584-ed5113bb0f82-cni-net-dir\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.056227 kubelet[1833]: I0317 17:58:27.055424 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/598bdf7f-c223-4253-a408-909f035a820f-kube-proxy\") pod \"kube-proxy-sxgg7\" (UID: \"598bdf7f-c223-4253-a408-909f035a820f\") " pod="kube-system/kube-proxy-sxgg7" Mar 17 17:58:27.056227 kubelet[1833]: I0317 17:58:27.055450 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/598bdf7f-c223-4253-a408-909f035a820f-lib-modules\") pod \"kube-proxy-sxgg7\" (UID: \"598bdf7f-c223-4253-a408-909f035a820f\") " pod="kube-system/kube-proxy-sxgg7" Mar 17 17:58:27.056479 kubelet[1833]: I0317 17:58:27.055474 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/38969da3-b697-4c4b-a584-ed5113bb0f82-cni-bin-dir\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.056479 kubelet[1833]: I0317 17:58:27.055497 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/38969da3-b697-4c4b-a584-ed5113bb0f82-flexvol-driver-host\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.056479 kubelet[1833]: I0317 17:58:27.055527 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9kwz\" (UniqueName: \"kubernetes.io/projected/38969da3-b697-4c4b-a584-ed5113bb0f82-kube-api-access-d9kwz\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " 
pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.056479 kubelet[1833]: I0317 17:58:27.055549 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e76c5731-9144-4644-b2d4-50c1e2e23da7-registration-dir\") pod \"csi-node-driver-b69cd\" (UID: \"e76c5731-9144-4644-b2d4-50c1e2e23da7\") " pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:27.056479 kubelet[1833]: I0317 17:58:27.055573 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38969da3-b697-4c4b-a584-ed5113bb0f82-xtables-lock\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.056615 kubelet[1833]: I0317 17:58:27.055593 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/38969da3-b697-4c4b-a584-ed5113bb0f82-node-certs\") pod \"calico-node-rqkvv\" (UID: \"38969da3-b697-4c4b-a584-ed5113bb0f82\") " pod="calico-system/calico-node-rqkvv" Mar 17 17:58:27.062599 systemd[1]: Created slice kubepods-besteffort-pod38969da3_b697_4c4b_a584_ed5113bb0f82.slice - libcontainer container kubepods-besteffort-pod38969da3_b697_4c4b_a584_ed5113bb0f82.slice. Mar 17 17:58:27.170848 kubelet[1833]: E0317 17:58:27.168036 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:58:27.170848 kubelet[1833]: W0317 17:58:27.168072 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:58:27.170848 kubelet[1833]: E0317 17:58:27.168108 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:58:27.188491 kubelet[1833]: E0317 17:58:27.188327 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:58:27.188696 kubelet[1833]: W0317 17:58:27.188675 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:58:27.189401 kubelet[1833]: E0317 17:58:27.189358 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:58:27.193112 kubelet[1833]: E0317 17:58:27.193068 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:58:27.193112 kubelet[1833]: W0317 17:58:27.193101 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:58:27.193291 kubelet[1833]: E0317 17:58:27.193140 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:58:27.199990 kubelet[1833]: E0317 17:58:27.199826 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:58:27.199990 kubelet[1833]: W0317 17:58:27.199866 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:58:27.199990 kubelet[1833]: E0317 17:58:27.199904 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:58:27.359097 kubelet[1833]: E0317 17:58:27.358652 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:27.360008 containerd[1478]: time="2025-03-17T17:58:27.359738558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxgg7,Uid:598bdf7f-c223-4253-a408-909f035a820f,Namespace:kube-system,Attempt:0,}" Mar 17 17:58:27.366395 kubelet[1833]: E0317 17:58:27.366057 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:27.367739 containerd[1478]: time="2025-03-17T17:58:27.367682242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rqkvv,Uid:38969da3-b697-4c4b-a584-ed5113bb0f82,Namespace:calico-system,Attempt:0,}" Mar 17 17:58:27.918653 containerd[1478]: time="2025-03-17T17:58:27.916773068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:58:27.918653 containerd[1478]: time="2025-03-17T17:58:27.917723056Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:58:27.918653 containerd[1478]: time="2025-03-17T17:58:27.918566169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:58:27.918653 containerd[1478]: time="2025-03-17T17:58:27.918610097Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:58:27.919056 containerd[1478]: time="2025-03-17T17:58:27.919026404Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:58:27.924109 containerd[1478]: time="2025-03-17T17:58:27.924042048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:58:27.925553 containerd[1478]: time="2025-03-17T17:58:27.925479657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.40911ms" Mar 17 17:58:27.926966 containerd[1478]: time="2025-03-17T17:58:27.926927568Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.076301ms" Mar 17 17:58:28.019584 kubelet[1833]: E0317 17:58:28.019499 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:28.108695 containerd[1478]: time="2025-03-17T17:58:28.108555498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:28.108695 containerd[1478]: time="2025-03-17T17:58:28.108626057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:28.108976 containerd[1478]: time="2025-03-17T17:58:28.108644974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:28.108976 containerd[1478]: time="2025-03-17T17:58:28.108896494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:28.109984 containerd[1478]: time="2025-03-17T17:58:28.109856396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:28.110111 containerd[1478]: time="2025-03-17T17:58:28.110010729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:28.110836 containerd[1478]: time="2025-03-17T17:58:28.110710314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:28.110979 containerd[1478]: time="2025-03-17T17:58:28.110874379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:28.172850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748478753.mount: Deactivated successfully. Mar 17 17:58:28.214962 systemd[1]: run-containerd-runc-k8s.io-01b9d7c313ec1afa271f4352a7279f9c6bd193296c07df3291beddc2111917fa-runc.hMxcMT.mount: Deactivated successfully. Mar 17 17:58:28.230057 systemd[1]: Started cri-containerd-01b9d7c313ec1afa271f4352a7279f9c6bd193296c07df3291beddc2111917fa.scope - libcontainer container 01b9d7c313ec1afa271f4352a7279f9c6bd193296c07df3291beddc2111917fa. Mar 17 17:58:28.231801 systemd[1]: Started cri-containerd-3a7130949768b2d4d8a12fc5e69dce0aa60a5fe05c35b36b0df4a74bd80bd249.scope - libcontainer container 3a7130949768b2d4d8a12fc5e69dce0aa60a5fe05c35b36b0df4a74bd80bd249. 
Mar 17 17:58:28.284661 containerd[1478]: time="2025-03-17T17:58:28.284512670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rqkvv,Uid:38969da3-b697-4c4b-a584-ed5113bb0f82,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a7130949768b2d4d8a12fc5e69dce0aa60a5fe05c35b36b0df4a74bd80bd249\"" Mar 17 17:58:28.288061 kubelet[1833]: E0317 17:58:28.287799 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:28.289867 containerd[1478]: time="2025-03-17T17:58:28.289657321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:58:28.294891 containerd[1478]: time="2025-03-17T17:58:28.294596587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxgg7,Uid:598bdf7f-c223-4253-a408-909f035a820f,Namespace:kube-system,Attempt:0,} returns sandbox id \"01b9d7c313ec1afa271f4352a7279f9c6bd193296c07df3291beddc2111917fa\"" Mar 17 17:58:28.295500 kubelet[1833]: E0317 17:58:28.295473 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:29.020658 kubelet[1833]: E0317 17:58:29.020572 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:29.155843 kubelet[1833]: E0317 17:58:29.155622 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:29.835456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970680075.mount: Deactivated successfully. 
Mar 17 17:58:29.970509 containerd[1478]: time="2025-03-17T17:58:29.970450103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:29.971924 containerd[1478]: time="2025-03-17T17:58:29.971856241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253" Mar 17 17:58:29.972857 containerd[1478]: time="2025-03-17T17:58:29.972640093Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:29.975012 containerd[1478]: time="2025-03-17T17:58:29.974946502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:29.976471 containerd[1478]: time="2025-03-17T17:58:29.976122941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.686407616s" Mar 17 17:58:29.976471 containerd[1478]: time="2025-03-17T17:58:29.976176252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 17:58:29.978492 containerd[1478]: time="2025-03-17T17:58:29.978244219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 17:58:29.980568 containerd[1478]: time="2025-03-17T17:58:29.980254082Z" level=info msg="CreateContainer within sandbox \"3a7130949768b2d4d8a12fc5e69dce0aa60a5fe05c35b36b0df4a74bd80bd249\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:58:30.007960 containerd[1478]: time="2025-03-17T17:58:30.007899735Z" level=info msg="CreateContainer within sandbox \"3a7130949768b2d4d8a12fc5e69dce0aa60a5fe05c35b36b0df4a74bd80bd249\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa3bb5acee03ed36fcd001dc8778e1f695284db751dfa31e3f947d1710afb3e8\"" Mar 17 17:58:30.009153 containerd[1478]: time="2025-03-17T17:58:30.009105910Z" level=info msg="StartContainer for \"aa3bb5acee03ed36fcd001dc8778e1f695284db751dfa31e3f947d1710afb3e8\"" Mar 17 17:58:30.021910 kubelet[1833]: E0317 17:58:30.021829 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:30.063100 systemd[1]: Started cri-containerd-aa3bb5acee03ed36fcd001dc8778e1f695284db751dfa31e3f947d1710afb3e8.scope - libcontainer container aa3bb5acee03ed36fcd001dc8778e1f695284db751dfa31e3f947d1710afb3e8. Mar 17 17:58:30.114543 containerd[1478]: time="2025-03-17T17:58:30.114462663Z" level=info msg="StartContainer for \"aa3bb5acee03ed36fcd001dc8778e1f695284db751dfa31e3f947d1710afb3e8\" returns successfully" Mar 17 17:58:30.131689 systemd[1]: cri-containerd-aa3bb5acee03ed36fcd001dc8778e1f695284db751dfa31e3f947d1710afb3e8.scope: Deactivated successfully. 
Mar 17 17:58:30.178430 containerd[1478]: time="2025-03-17T17:58:30.178320570Z" level=info msg="shim disconnected" id=aa3bb5acee03ed36fcd001dc8778e1f695284db751dfa31e3f947d1710afb3e8 namespace=k8s.io Mar 17 17:58:30.178430 containerd[1478]: time="2025-03-17T17:58:30.178427725Z" level=warning msg="cleaning up after shim disconnected" id=aa3bb5acee03ed36fcd001dc8778e1f695284db751dfa31e3f947d1710afb3e8 namespace=k8s.io Mar 17 17:58:30.178430 containerd[1478]: time="2025-03-17T17:58:30.178444130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:58:30.197005 containerd[1478]: time="2025-03-17T17:58:30.196953323Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:58:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:58:30.205968 kubelet[1833]: E0317 17:58:30.205618 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:30.793951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa3bb5acee03ed36fcd001dc8778e1f695284db751dfa31e3f947d1710afb3e8-rootfs.mount: Deactivated successfully. Mar 17 17:58:31.022312 kubelet[1833]: E0317 17:58:31.022206 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:31.083371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428769982.mount: Deactivated successfully. Mar 17 17:58:31.156244 kubelet[1833]: E0317 17:58:31.155382 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:31.616803 containerd[1478]: time="2025-03-17T17:58:31.616713122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:31.618721 containerd[1478]: time="2025-03-17T17:58:31.617730454Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185" Mar 17 17:58:31.618721 containerd[1478]: time="2025-03-17T17:58:31.618647863Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:31.621477 containerd[1478]: time="2025-03-17T17:58:31.621389510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:31.622418 containerd[1478]: time="2025-03-17T17:58:31.622380435Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 1.644103524s" Mar 17 17:58:31.622593 containerd[1478]: time="2025-03-17T17:58:31.622557176Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference 
\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 17 17:58:31.624223 containerd[1478]: time="2025-03-17T17:58:31.624184587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:58:31.626665 containerd[1478]: time="2025-03-17T17:58:31.626612399Z" level=info msg="CreateContainer within sandbox \"01b9d7c313ec1afa271f4352a7279f9c6bd193296c07df3291beddc2111917fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:58:31.646163 containerd[1478]: time="2025-03-17T17:58:31.646097264Z" level=info msg="CreateContainer within sandbox \"01b9d7c313ec1afa271f4352a7279f9c6bd193296c07df3291beddc2111917fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36be09a007f314449dde3aaaa9f4f8163295e4bc7768fa7ca0a3ca4bbc51c852\"" Mar 17 17:58:31.647142 containerd[1478]: time="2025-03-17T17:58:31.647107394Z" level=info msg="StartContainer for \"36be09a007f314449dde3aaaa9f4f8163295e4bc7768fa7ca0a3ca4bbc51c852\"" Mar 17 17:58:31.703231 systemd[1]: Started cri-containerd-36be09a007f314449dde3aaaa9f4f8163295e4bc7768fa7ca0a3ca4bbc51c852.scope - libcontainer container 36be09a007f314449dde3aaaa9f4f8163295e4bc7768fa7ca0a3ca4bbc51c852. Mar 17 17:58:31.754837 containerd[1478]: time="2025-03-17T17:58:31.754681748Z" level=info msg="StartContainer for \"36be09a007f314449dde3aaaa9f4f8163295e4bc7768fa7ca0a3ca4bbc51c852\" returns successfully" Mar 17 17:58:32.024630 kubelet[1833]: E0317 17:58:32.023844 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:32.217051 kubelet[1833]: E0317 17:58:32.216720 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:32.237802 kubelet[1833]: I0317 17:58:32.236548 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sxgg7" podStartSLOduration=2.9086222790000003 podStartE2EDuration="6.236525555s" podCreationTimestamp="2025-03-17 17:58:26 +0000 UTC" firstStartedPulling="2025-03-17 17:58:28.296097104 +0000 UTC m=+3.849088720" lastFinishedPulling="2025-03-17 17:58:31.624000365 +0000 UTC m=+7.176991996" observedRunningTime="2025-03-17 17:58:32.236240967 +0000 UTC m=+7.789232609" watchObservedRunningTime="2025-03-17 17:58:32.236525555 +0000 UTC m=+7.789517189" Mar 17 17:58:33.024579 kubelet[1833]: E0317 17:58:33.024490 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:33.159795 kubelet[1833]: E0317 17:58:33.159342 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:33.219272 kubelet[1833]: E0317 17:58:33.218612 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:34.025330 kubelet[1833]: E0317 17:58:34.025283 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:35.026883 kubelet[1833]: E0317 17:58:35.026799 1833 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:35.157063 kubelet[1833]: E0317 17:58:35.155583 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:35.581503 containerd[1478]: time="2025-03-17T17:58:35.581428582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:35.583157 containerd[1478]: time="2025-03-17T17:58:35.583071303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 17 17:58:35.584084 containerd[1478]: time="2025-03-17T17:58:35.584023497Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:35.586954 containerd[1478]: time="2025-03-17T17:58:35.586910788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:35.588313 containerd[1478]: time="2025-03-17T17:58:35.587627316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 3.963405251s" Mar 17 17:58:35.588313 containerd[1478]: time="2025-03-17T17:58:35.587667323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 17:58:35.592655 containerd[1478]: time="2025-03-17T17:58:35.592599177Z" level=info msg="CreateContainer within sandbox \"3a7130949768b2d4d8a12fc5e69dce0aa60a5fe05c35b36b0df4a74bd80bd249\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:58:35.623472 containerd[1478]: time="2025-03-17T17:58:35.623281029Z" level=info msg="CreateContainer within sandbox \"3a7130949768b2d4d8a12fc5e69dce0aa60a5fe05c35b36b0df4a74bd80bd249\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692\"" Mar 17 17:58:35.625552 containerd[1478]: time="2025-03-17T17:58:35.625228716Z" level=info msg="StartContainer for \"fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692\"" Mar 17 17:58:35.692202 systemd[1]: Started cri-containerd-fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692.scope - libcontainer container fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692. 
Mar 17 17:58:35.749422 containerd[1478]: time="2025-03-17T17:58:35.749341652Z" level=info msg="StartContainer for \"fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692\" returns successfully" Mar 17 17:58:36.028783 kubelet[1833]: E0317 17:58:36.028683 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:36.230009 kubelet[1833]: E0317 17:58:36.229924 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:36.689450 systemd[1]: cri-containerd-fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692.scope: Deactivated successfully. Mar 17 17:58:36.690315 systemd[1]: cri-containerd-fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692.scope: Consumed 900ms CPU time, 177.2M memory peak, 154M written to disk. Mar 17 17:58:36.743872 kubelet[1833]: I0317 17:58:36.742964 1833 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 17:58:36.743632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692-rootfs.mount: Deactivated successfully. Mar 17 17:58:36.793507 containerd[1478]: time="2025-03-17T17:58:36.792818164Z" level=info msg="shim disconnected" id=fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692 namespace=k8s.io Mar 17 17:58:36.793507 containerd[1478]: time="2025-03-17T17:58:36.793190621Z" level=warning msg="cleaning up after shim disconnected" id=fe776eecc1b0ada5571e7bed4fc88f01ac4233ca89fe5e0f94e76d9469767692 namespace=k8s.io Mar 17 17:58:36.793507 containerd[1478]: time="2025-03-17T17:58:36.793211119Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:58:37.032223 kubelet[1833]: E0317 17:58:37.031541 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:37.168572 systemd[1]: Created slice kubepods-besteffort-pode76c5731_9144_4644_b2d4_50c1e2e23da7.slice - libcontainer container kubepods-besteffort-pode76c5731_9144_4644_b2d4_50c1e2e23da7.slice. Mar 17 17:58:37.173685 containerd[1478]: time="2025-03-17T17:58:37.173633129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:0,}" Mar 17 17:58:37.243417 kubelet[1833]: E0317 17:58:37.243037 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:37.245249 containerd[1478]: time="2025-03-17T17:58:37.245196928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:58:37.248110 systemd-resolved[1338]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Mar 17 17:58:37.301966 containerd[1478]: time="2025-03-17T17:58:37.299180075Z" level=error msg="Failed to destroy network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:37.302912 containerd[1478]: time="2025-03-17T17:58:37.302591715Z" level=error msg="encountered an error cleaning up failed sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:37.302912 containerd[1478]: time="2025-03-17T17:58:37.302732779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:37.304412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a-shm.mount: Deactivated successfully. Mar 17 17:58:37.304875 kubelet[1833]: E0317 17:58:37.304674 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:37.305013 kubelet[1833]: E0317 17:58:37.304974 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:37.305127 kubelet[1833]: E0317 17:58:37.305102 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:37.305968 kubelet[1833]: E0317 17:58:37.305273 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:37.877809 systemd[1]: Created slice kubepods-besteffort-pod87935e86_3e97_4acc_b2e8_c204144caa65.slice - libcontainer container kubepods-besteffort-pod87935e86_3e97_4acc_b2e8_c204144caa65.slice. Mar 17 17:58:37.965421 kubelet[1833]: I0317 17:58:37.965316 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhrk\" (UniqueName: \"kubernetes.io/projected/87935e86-3e97-4acc-b2e8-c204144caa65-kube-api-access-7mhrk\") pod \"nginx-deployment-7fcdb87857-nn6b8\" (UID: \"87935e86-3e97-4acc-b2e8-c204144caa65\") " pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:38.032593 kubelet[1833]: E0317 17:58:38.032287 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:38.191611 containerd[1478]: time="2025-03-17T17:58:38.190948065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:0,}" Mar 17 17:58:38.262435 kubelet[1833]: I0317 17:58:38.261956 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a" Mar 17 17:58:38.266114 containerd[1478]: time="2025-03-17T17:58:38.265926616Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\"" Mar 17 17:58:38.266803 containerd[1478]: time="2025-03-17T17:58:38.266606596Z" level=info msg="Ensure that sandbox 348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a in task-service has been cleanup successfully" Mar 17 17:58:38.270330 containerd[1478]: time="2025-03-17T17:58:38.270009249Z" level=info msg="TearDown network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" successfully" Mar 17 17:58:38.270330 containerd[1478]: time="2025-03-17T17:58:38.270066487Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" returns successfully" Mar 17 17:58:38.272266 systemd[1]: run-netns-cni\x2d30c84e74\x2d1cd3\x2dddb7\x2d0bae\x2dac8822b46372.mount: Deactivated successfully. 
Mar 17 17:58:38.277253 containerd[1478]: time="2025-03-17T17:58:38.277184176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:1,}" Mar 17 17:58:38.356641 containerd[1478]: time="2025-03-17T17:58:38.356388240Z" level=error msg="Failed to destroy network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:38.357477 containerd[1478]: time="2025-03-17T17:58:38.357339983Z" level=error msg="encountered an error cleaning up failed sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:38.357477 containerd[1478]: time="2025-03-17T17:58:38.357445136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:38.358157 kubelet[1833]: E0317 17:58:38.358090 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:38.358270 kubelet[1833]: E0317 17:58:38.358185 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:38.358270 kubelet[1833]: E0317 17:58:38.358220 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:38.358393 kubelet[1833]: E0317 17:58:38.358281 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-nn6b8" podUID="87935e86-3e97-4acc-b2e8-c204144caa65" Mar 17 17:58:38.407673 containerd[1478]: time="2025-03-17T17:58:38.407492465Z" level=error msg="Failed to destroy network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:38.408677 containerd[1478]: time="2025-03-17T17:58:38.408294564Z" level=error msg="encountered an error cleaning up failed sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:38.408677 containerd[1478]: time="2025-03-17T17:58:38.408623753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:38.413166 kubelet[1833]: E0317 17:58:38.411953 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:38.413166 kubelet[1833]: E0317 17:58:38.412063 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:38.413166 kubelet[1833]: E0317 17:58:38.412091 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:38.413373 kubelet[1833]: E0317 17:58:38.412138 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:39.033053 kubelet[1833]: E0317 17:58:39.032991 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:39.108245 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e-shm.mount: Deactivated successfully. Mar 17 17:58:39.110284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427-shm.mount: Deactivated successfully. Mar 17 17:58:39.269874 kubelet[1833]: I0317 17:58:39.267891 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e" Mar 17 17:58:39.273794 containerd[1478]: time="2025-03-17T17:58:39.270995214Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\"" Mar 17 17:58:39.273794 containerd[1478]: time="2025-03-17T17:58:39.271325252Z" level=info msg="Ensure that sandbox 9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e in task-service has been cleanup successfully" Mar 17 17:58:39.277390 systemd[1]: run-netns-cni\x2d3a0f3862\x2db62c\x2debc7\x2da9e0\x2deeb0acf0f66b.mount: Deactivated successfully. Mar 17 17:58:39.277957 containerd[1478]: time="2025-03-17T17:58:39.277899983Z" level=info msg="TearDown network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" successfully" Mar 17 17:58:39.277957 containerd[1478]: time="2025-03-17T17:58:39.277952673Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" returns successfully" Mar 17 17:58:39.281955 containerd[1478]: time="2025-03-17T17:58:39.278811172Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\"" Mar 17 17:58:39.281955 containerd[1478]: time="2025-03-17T17:58:39.279056943Z" level=info msg="TearDown network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" successfully" Mar 17 17:58:39.281955 containerd[1478]: time="2025-03-17T17:58:39.279083800Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" returns successfully" Mar 17 17:58:39.282530 containerd[1478]: time="2025-03-17T17:58:39.282059824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:2,}" Mar 17 17:58:39.319980 kubelet[1833]: I0317 17:58:39.318381 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427" Mar 17 17:58:39.322100 containerd[1478]: time="2025-03-17T17:58:39.320868757Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\"" Mar 17 17:58:39.322100 containerd[1478]: time="2025-03-17T17:58:39.321216071Z" level=info msg="Ensure that sandbox 6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427 in task-service 
has been cleanup successfully" Mar 17 17:58:39.328302 systemd[1]: run-netns-cni\x2d95258185\x2d661c\x2dbf71\x2da416\x2d6eecb29aa5f8.mount: Deactivated successfully. Mar 17 17:58:39.331615 containerd[1478]: time="2025-03-17T17:58:39.331534171Z" level=info msg="TearDown network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" successfully" Mar 17 17:58:39.332392 containerd[1478]: time="2025-03-17T17:58:39.332177890Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" returns successfully" Mar 17 17:58:39.337281 containerd[1478]: time="2025-03-17T17:58:39.337106594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:1,}" Mar 17 17:58:39.605341 containerd[1478]: time="2025-03-17T17:58:39.605227520Z" level=error msg="Failed to destroy network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:39.607222 containerd[1478]: time="2025-03-17T17:58:39.607151888Z" level=error msg="encountered an error cleaning up failed sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:39.608332 containerd[1478]: time="2025-03-17T17:58:39.607259197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:39.608639 kubelet[1833]: E0317 17:58:39.607715 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:39.608639 kubelet[1833]: E0317 17:58:39.607859 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:39.608639 kubelet[1833]: E0317 17:58:39.607896 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:39.608901 kubelet[1833]: E0317 17:58:39.607972 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:39.628414 containerd[1478]: time="2025-03-17T17:58:39.626947049Z" level=error msg="Failed to destroy network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:39.628414 containerd[1478]: time="2025-03-17T17:58:39.627485841Z" level=error msg="encountered an error cleaning up failed sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:39.628414 containerd[1478]: time="2025-03-17T17:58:39.627564832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:39.630059 kubelet[1833]: E0317 17:58:39.627914 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:39.630059 kubelet[1833]: E0317 17:58:39.628003 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:39.630059 kubelet[1833]: E0317 17:58:39.628071 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:39.630321 kubelet[1833]: E0317 17:58:39.628147 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-nn6b8" podUID="87935e86-3e97-4acc-b2e8-c204144caa65" Mar 17 17:58:40.034890 kubelet[1833]: E0317 17:58:40.034691 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:40.116016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae-shm.mount: Deactivated successfully. Mar 17 17:58:40.324821 kubelet[1833]: I0317 17:58:40.323641 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185" Mar 17 17:58:40.325549 containerd[1478]: time="2025-03-17T17:58:40.325369905Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\"" Mar 17 17:58:40.326866 containerd[1478]: time="2025-03-17T17:58:40.326285200Z" level=info msg="Ensure that sandbox f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185 in task-service has been cleanup successfully" Mar 17 17:58:40.328926 containerd[1478]: time="2025-03-17T17:58:40.328888232Z" level=info msg="TearDown network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" successfully" Mar 17 17:58:40.329182 containerd[1478]: time="2025-03-17T17:58:40.329145266Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" returns successfully" Mar 17 17:58:40.332189 containerd[1478]: time="2025-03-17T17:58:40.331053508Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\"" Mar 17 17:58:40.332189 containerd[1478]: time="2025-03-17T17:58:40.331189183Z" level=info msg="TearDown network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" successfully" Mar 17 17:58:40.332189 containerd[1478]: time="2025-03-17T17:58:40.331202834Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" returns successfully" Mar 17 17:58:40.331633 systemd[1]: run-netns-cni\x2d519f59a4\x2dfdbc\x2dcad5\x2d5e58\x2d36c7e54e5925.mount: Deactivated successfully. 
Mar 17 17:58:40.336898 containerd[1478]: time="2025-03-17T17:58:40.333685181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:2,}" Mar 17 17:58:40.337146 kubelet[1833]: I0317 17:58:40.335894 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae" Mar 17 17:58:40.337263 containerd[1478]: time="2025-03-17T17:58:40.337180474Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\"" Mar 17 17:58:40.337727 containerd[1478]: time="2025-03-17T17:58:40.337484945Z" level=info msg="Ensure that sandbox cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae in task-service has been cleanup successfully" Mar 17 17:58:40.340136 containerd[1478]: time="2025-03-17T17:58:40.340086818Z" level=info msg="TearDown network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" successfully" Mar 17 17:58:40.340136 containerd[1478]: time="2025-03-17T17:58:40.340132757Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" returns successfully" Mar 17 17:58:40.341614 systemd[1]: run-netns-cni\x2d3a4ff11f\x2d8420\x2d630e\x2d328d\x2d6314da09455c.mount: Deactivated successfully. Mar 17 17:58:40.344896 containerd[1478]: time="2025-03-17T17:58:40.343982790Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\"" Mar 17 17:58:40.344896 containerd[1478]: time="2025-03-17T17:58:40.344209568Z" level=info msg="TearDown network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" successfully" Mar 17 17:58:40.344896 containerd[1478]: time="2025-03-17T17:58:40.344240362Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" returns successfully" Mar 17 17:58:40.347151 containerd[1478]: time="2025-03-17T17:58:40.346862223Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\"" Mar 17 17:58:40.347370 containerd[1478]: time="2025-03-17T17:58:40.347078931Z" level=info msg="TearDown network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" successfully" Mar 17 17:58:40.347370 containerd[1478]: time="2025-03-17T17:58:40.347207036Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" returns successfully" Mar 17 17:58:40.349298 containerd[1478]: time="2025-03-17T17:58:40.348536939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:3,}" Mar 17 17:58:40.358486 systemd-resolved[1338]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Mar 17 17:58:40.596128 containerd[1478]: time="2025-03-17T17:58:40.595926719Z" level=error msg="Failed to destroy network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:40.596654 containerd[1478]: time="2025-03-17T17:58:40.596605882Z" level=error msg="encountered an error cleaning up failed sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:40.596844 containerd[1478]: time="2025-03-17T17:58:40.596687443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:40.597539 kubelet[1833]: E0317 17:58:40.597002 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:40.597539 kubelet[1833]: E0317 17:58:40.597068 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:40.597539 kubelet[1833]: E0317 17:58:40.597093 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:40.597721 kubelet[1833]: E0317 17:58:40.597152 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-7fcdb87857-nn6b8" podUID="87935e86-3e97-4acc-b2e8-c204144caa65" Mar 17 17:58:40.601874 containerd[1478]: time="2025-03-17T17:58:40.601662432Z" level=error msg="Failed to destroy network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:40.602877 containerd[1478]: time="2025-03-17T17:58:40.602705929Z" level=error msg="encountered an error cleaning up failed sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:40.603672 containerd[1478]: time="2025-03-17T17:58:40.603444065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:40.603913 kubelet[1833]: E0317 17:58:40.603826 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:40.604016 kubelet[1833]: E0317 17:58:40.603937 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:40.604016 kubelet[1833]: E0317 17:58:40.604003 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:40.604139 kubelet[1833]: E0317 17:58:40.604077 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:41.035192 kubelet[1833]: E0317 17:58:41.035036 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:41.109309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914-shm.mount: Deactivated successfully. Mar 17 17:58:41.109537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf-shm.mount: Deactivated successfully. Mar 17 17:58:42.326806 systemd-resolved[1338]: Clock change detected. Flushing caches. Mar 17 17:58:42.327579 systemd-timesyncd[1354]: Contacted time server 75.72.171.171:123 (2.flatcar.pool.ntp.org). Mar 17 17:58:42.327697 systemd-timesyncd[1354]: Initial clock synchronization to Mon 2025-03-17 17:58:42.326543 UTC. Mar 17 17:58:42.485581 kubelet[1833]: I0317 17:58:42.483862 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914" Mar 17 17:58:42.485779 containerd[1478]: time="2025-03-17T17:58:42.485076071Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\"" Mar 17 17:58:42.485779 containerd[1478]: time="2025-03-17T17:58:42.485375757Z" level=info msg="Ensure that sandbox 606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914 in task-service has been cleanup successfully" Mar 17 17:58:42.488214 systemd[1]: run-netns-cni\x2d3b6175a5\x2d91df\x2d3b92\x2dfadd\x2de1e2924ac83d.mount: Deactivated successfully. Mar 17 17:58:42.490311 containerd[1478]: time="2025-03-17T17:58:42.490254722Z" level=info msg="TearDown network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" successfully" Mar 17 17:58:42.490311 containerd[1478]: time="2025-03-17T17:58:42.490307312Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" returns successfully" Mar 17 17:58:42.493352 containerd[1478]: time="2025-03-17T17:58:42.493117699Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\"" Mar 17 17:58:42.493352 containerd[1478]: time="2025-03-17T17:58:42.493243828Z" level=info msg="TearDown network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" successfully" Mar 17 17:58:42.493352 containerd[1478]: time="2025-03-17T17:58:42.493257904Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" returns successfully" Mar 17 17:58:42.494375 containerd[1478]: time="2025-03-17T17:58:42.494343172Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\"" Mar 17 17:58:42.494476 containerd[1478]: time="2025-03-17T17:58:42.494466159Z" level=info msg="TearDown network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" successfully" Mar 17 17:58:42.494518 containerd[1478]: time="2025-03-17T17:58:42.494479405Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" returns successfully" Mar 17 17:58:42.495944 containerd[1478]: time="2025-03-17T17:58:42.495650732Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\"" 
Mar 17 17:58:42.495944 containerd[1478]: time="2025-03-17T17:58:42.495839203Z" level=info msg="TearDown network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" successfully" Mar 17 17:58:42.495944 containerd[1478]: time="2025-03-17T17:58:42.495853190Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" returns successfully" Mar 17 17:58:42.496974 containerd[1478]: time="2025-03-17T17:58:42.496340111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:4,}" Mar 17 17:58:42.498158 kubelet[1833]: I0317 17:58:42.498023 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf" Mar 17 17:58:42.501109 containerd[1478]: time="2025-03-17T17:58:42.500985915Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\"" Mar 17 17:58:42.501445 containerd[1478]: time="2025-03-17T17:58:42.501284259Z" level=info msg="Ensure that sandbox e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf in task-service has been cleanup successfully" Mar 17 17:58:42.504563 containerd[1478]: time="2025-03-17T17:58:42.503799172Z" level=info msg="TearDown network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" successfully" Mar 17 17:58:42.504563 containerd[1478]: time="2025-03-17T17:58:42.503836899Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" returns successfully" Mar 17 17:58:42.504791 containerd[1478]: time="2025-03-17T17:58:42.504612138Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\"" Mar 17 17:58:42.504842 containerd[1478]: time="2025-03-17T17:58:42.504788520Z" level=info msg="TearDown network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" successfully" Mar 17 17:58:42.504842 containerd[1478]: time="2025-03-17T17:58:42.504804741Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" returns successfully" Mar 17 17:58:42.505569 systemd[1]: run-netns-cni\x2dcb7ba062\x2dc681\x2d8999\x2db645\x2db52910174ee1.mount: Deactivated successfully. 
Mar 17 17:58:42.506474 containerd[1478]: time="2025-03-17T17:58:42.506065232Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\"" Mar 17 17:58:42.506474 containerd[1478]: time="2025-03-17T17:58:42.506228522Z" level=info msg="TearDown network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" successfully" Mar 17 17:58:42.506474 containerd[1478]: time="2025-03-17T17:58:42.506252035Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" returns successfully" Mar 17 17:58:42.507772 containerd[1478]: time="2025-03-17T17:58:42.507048142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:3,}" Mar 17 17:58:42.705328 containerd[1478]: time="2025-03-17T17:58:42.705269131Z" level=error msg="Failed to destroy network for sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:42.705812 containerd[1478]: time="2025-03-17T17:58:42.705694610Z" level=error msg="encountered an error cleaning up failed sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:42.705812 containerd[1478]: time="2025-03-17T17:58:42.705797777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:42.706196 kubelet[1833]: E0317 17:58:42.706072 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:42.706196 kubelet[1833]: E0317 17:58:42.706140 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:42.706196 kubelet[1833]: E0317 17:58:42.706174 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:42.706324 kubelet[1833]: E0317 17:58:42.706222 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:42.736420 containerd[1478]: time="2025-03-17T17:58:42.736098172Z" level=error msg="Failed to destroy network for sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:42.737887 containerd[1478]: time="2025-03-17T17:58:42.737376767Z" level=error msg="encountered an error cleaning up failed sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:42.737887 containerd[1478]: time="2025-03-17T17:58:42.737485376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:42.738134 kubelet[1833]: E0317 17:58:42.737979 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:42.738134 kubelet[1833]: E0317 17:58:42.738060 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:42.738134 kubelet[1833]: E0317 17:58:42.738097 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:42.738283 kubelet[1833]: E0317 17:58:42.738156 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-nn6b8" podUID="87935e86-3e97-4acc-b2e8-c204144caa65" Mar 17 17:58:43.176632 kubelet[1833]: E0317 17:58:43.175809 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:43.250260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50-shm.mount: Deactivated successfully. Mar 17 17:58:43.250988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c-shm.mount: Deactivated successfully. Mar 17 17:58:43.506786 kubelet[1833]: I0317 17:58:43.506735 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c" Mar 17 17:58:43.507842 containerd[1478]: time="2025-03-17T17:58:43.507755942Z" level=info msg="StopPodSandbox for \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\"" Mar 17 17:58:43.509654 containerd[1478]: time="2025-03-17T17:58:43.509141305Z" level=info msg="Ensure that sandbox d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c in task-service has been cleanup successfully" Mar 17 17:58:43.512485 containerd[1478]: time="2025-03-17T17:58:43.512107166Z" level=info msg="TearDown network for sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\" successfully" Mar 17 17:58:43.512485 containerd[1478]: time="2025-03-17T17:58:43.512145416Z" level=info msg="StopPodSandbox for \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\" returns successfully" Mar 17 17:58:43.514391 containerd[1478]: time="2025-03-17T17:58:43.513261487Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\"" Mar 17 17:58:43.514391 containerd[1478]: time="2025-03-17T17:58:43.513398522Z" level=info msg="TearDown network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" successfully" Mar 17 17:58:43.514391 containerd[1478]: time="2025-03-17T17:58:43.513415633Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" returns successfully" Mar 17 17:58:43.514317 systemd[1]: run-netns-cni\x2d17b5dc10\x2d86cc\x2d0df0\x2d9dd6\x2d11a97c7f17b1.mount: Deactivated successfully. 
Mar 17 17:58:43.519682 containerd[1478]: time="2025-03-17T17:58:43.519071006Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\"" Mar 17 17:58:43.519682 containerd[1478]: time="2025-03-17T17:58:43.519214807Z" level=info msg="TearDown network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" successfully" Mar 17 17:58:43.519682 containerd[1478]: time="2025-03-17T17:58:43.519286191Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" returns successfully" Mar 17 17:58:43.521093 kubelet[1833]: I0317 17:58:43.519567 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50" Mar 17 17:58:43.521597 containerd[1478]: time="2025-03-17T17:58:43.521526431Z" level=info msg="StopPodSandbox for \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\"" Mar 17 17:58:43.522159 containerd[1478]: time="2025-03-17T17:58:43.522104410Z" level=info msg="Ensure that sandbox fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50 in task-service has been cleanup successfully" Mar 17 17:58:43.523105 containerd[1478]: time="2025-03-17T17:58:43.523068577Z" level=info msg="TearDown network for sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\" successfully" Mar 17 17:58:43.525753 containerd[1478]: time="2025-03-17T17:58:43.523248276Z" level=info msg="StopPodSandbox for \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\" returns successfully" Mar 17 17:58:43.526830 containerd[1478]: time="2025-03-17T17:58:43.526164754Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\"" Mar 17 17:58:43.526830 containerd[1478]: time="2025-03-17T17:58:43.526363687Z" level=info msg="TearDown network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" successfully" Mar 17 17:58:43.526830 containerd[1478]: time="2025-03-17T17:58:43.526389285Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" returns successfully" Mar 17 17:58:43.528669 systemd[1]: run-netns-cni\x2d447e9617\x2d4572\x2d0341\x2ddd3b\x2d4e6aafd9c27f.mount: Deactivated successfully. 
Mar 17 17:58:43.532696 containerd[1478]: time="2025-03-17T17:58:43.532201802Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\"" Mar 17 17:58:43.532696 containerd[1478]: time="2025-03-17T17:58:43.532381875Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\"" Mar 17 17:58:43.532696 containerd[1478]: time="2025-03-17T17:58:43.532633519Z" level=info msg="TearDown network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" successfully" Mar 17 17:58:43.532696 containerd[1478]: time="2025-03-17T17:58:43.532659477Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" returns successfully" Mar 17 17:58:43.533880 containerd[1478]: time="2025-03-17T17:58:43.533554173Z" level=info msg="TearDown network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" successfully" Mar 17 17:58:43.533880 containerd[1478]: time="2025-03-17T17:58:43.533585300Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" returns successfully" Mar 17 17:58:43.534091 containerd[1478]: time="2025-03-17T17:58:43.533987760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:5,}" Mar 17 17:58:43.537123 containerd[1478]: time="2025-03-17T17:58:43.536873430Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\"" Mar 17 17:58:43.537123 containerd[1478]: time="2025-03-17T17:58:43.537011312Z" level=info msg="TearDown network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" successfully" Mar 17 17:58:43.537123 containerd[1478]: time="2025-03-17T17:58:43.537023215Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" returns successfully" Mar 17 17:58:43.540775 containerd[1478]: time="2025-03-17T17:58:43.540566649Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\"" Mar 17 17:58:43.544048 containerd[1478]: time="2025-03-17T17:58:43.543823800Z" level=info msg="TearDown network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" successfully" Mar 17 17:58:43.544048 containerd[1478]: time="2025-03-17T17:58:43.543869804Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" returns successfully" Mar 17 17:58:43.547987 containerd[1478]: time="2025-03-17T17:58:43.547643775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:4,}" Mar 17 17:58:43.793521 containerd[1478]: time="2025-03-17T17:58:43.792915739Z" level=error msg="Failed to destroy network for sandbox \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:43.796164 containerd[1478]: time="2025-03-17T17:58:43.795658247Z" level=error msg="encountered an error cleaning up failed sandbox \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:43.796679 containerd[1478]: time="2025-03-17T17:58:43.796356323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:43.796821 kubelet[1833]: E0317 17:58:43.796775 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:43.796905 kubelet[1833]: E0317 17:58:43.796859 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:43.796948 kubelet[1833]: E0317 17:58:43.796909 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:43.797003 kubelet[1833]: E0317 17:58:43.796968 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:43.810090 containerd[1478]: time="2025-03-17T17:58:43.809880086Z" level=error msg="Failed to destroy network for sandbox \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:43.811098 containerd[1478]: time="2025-03-17T17:58:43.811025030Z" level=error msg="encountered an error cleaning up failed sandbox \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:43.811522 containerd[1478]: time="2025-03-17T17:58:43.811131911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:43.811695 kubelet[1833]: E0317 17:58:43.811431 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:43.811695 kubelet[1833]: E0317 17:58:43.811514 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:43.811695 kubelet[1833]: E0317 17:58:43.811550 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:43.811886 kubelet[1833]: E0317 17:58:43.811605 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-nn6b8" podUID="87935e86-3e97-4acc-b2e8-c204144caa65" Mar 17 17:58:44.176974 kubelet[1833]: E0317 17:58:44.176822 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:44.250147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479-shm.mount: Deactivated successfully. 
Mar 17 17:58:44.250326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7-shm.mount: Deactivated successfully. Mar 17 17:58:44.527674 kubelet[1833]: I0317 17:58:44.526793 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479" Mar 17 17:58:44.527868 containerd[1478]: time="2025-03-17T17:58:44.527590073Z" level=info msg="StopPodSandbox for \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\"" Mar 17 17:58:44.529878 containerd[1478]: time="2025-03-17T17:58:44.529385729Z" level=info msg="Ensure that sandbox 2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479 in task-service has been cleanup successfully" Mar 17 17:58:44.532736 containerd[1478]: time="2025-03-17T17:58:44.530679229Z" level=info msg="TearDown network for sandbox \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\" successfully" Mar 17 17:58:44.532736 containerd[1478]: time="2025-03-17T17:58:44.530732028Z" level=info msg="StopPodSandbox for \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\" returns successfully" Mar 17 17:58:44.533846 containerd[1478]: time="2025-03-17T17:58:44.533536179Z" level=info msg="StopPodSandbox for \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\"" Mar 17 17:58:44.533846 containerd[1478]: time="2025-03-17T17:58:44.533676777Z" level=info msg="TearDown network for sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\" successfully" Mar 17 17:58:44.533846 containerd[1478]: time="2025-03-17T17:58:44.533697468Z" level=info msg="StopPodSandbox for \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\" returns successfully" Mar 17 17:58:44.534496 systemd[1]: run-netns-cni\x2dae4ba9c2\x2d3eca\x2d9497\x2d0d17\x2dd43e498c7dd0.mount: Deactivated successfully. 
Mar 17 17:58:44.536413 containerd[1478]: time="2025-03-17T17:58:44.536164352Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\"" Mar 17 17:58:44.536413 containerd[1478]: time="2025-03-17T17:58:44.536310352Z" level=info msg="TearDown network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" successfully" Mar 17 17:58:44.536413 containerd[1478]: time="2025-03-17T17:58:44.536329642Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" returns successfully" Mar 17 17:58:44.537348 containerd[1478]: time="2025-03-17T17:58:44.537254952Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\"" Mar 17 17:58:44.537577 containerd[1478]: time="2025-03-17T17:58:44.537555039Z" level=info msg="TearDown network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" successfully" Mar 17 17:58:44.537977 containerd[1478]: time="2025-03-17T17:58:44.537675661Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" returns successfully" Mar 17 17:58:44.539509 kubelet[1833]: I0317 17:58:44.538911 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7" Mar 17 17:58:44.540036 containerd[1478]: time="2025-03-17T17:58:44.540007316Z" level=info msg="StopPodSandbox for \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\"" Mar 17 17:58:44.541129 containerd[1478]: time="2025-03-17T17:58:44.541088677Z" level=info msg="Ensure that sandbox 9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7 in task-service has been cleanup successfully" Mar 17 17:58:44.543687 containerd[1478]: time="2025-03-17T17:58:44.540218742Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\"" Mar 17 17:58:44.544583 containerd[1478]: time="2025-03-17T17:58:44.544096670Z" level=info msg="TearDown network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" successfully" Mar 17 17:58:44.544583 containerd[1478]: time="2025-03-17T17:58:44.544168444Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" returns successfully" Mar 17 17:58:44.544583 containerd[1478]: time="2025-03-17T17:58:44.543874407Z" level=info msg="TearDown network for sandbox \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\" successfully" Mar 17 17:58:44.544583 containerd[1478]: time="2025-03-17T17:58:44.544491236Z" level=info msg="StopPodSandbox for \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\" returns successfully" Mar 17 17:58:44.546053 systemd[1]: run-netns-cni\x2d65a6792a\x2db168\x2de6d8\x2d249c\x2dd9152976c743.mount: Deactivated successfully. 
Mar 17 17:58:44.548096 containerd[1478]: time="2025-03-17T17:58:44.547645803Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\"" Mar 17 17:58:44.548096 containerd[1478]: time="2025-03-17T17:58:44.547932691Z" level=info msg="TearDown network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" successfully" Mar 17 17:58:44.548096 containerd[1478]: time="2025-03-17T17:58:44.547968214Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" returns successfully" Mar 17 17:58:44.548569 containerd[1478]: time="2025-03-17T17:58:44.548108480Z" level=info msg="StopPodSandbox for \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\"" Mar 17 17:58:44.551481 containerd[1478]: time="2025-03-17T17:58:44.551072092Z" level=info msg="TearDown network for sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\" successfully" Mar 17 17:58:44.551481 containerd[1478]: time="2025-03-17T17:58:44.551138361Z" level=info msg="StopPodSandbox for \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\" returns successfully" Mar 17 17:58:44.551481 containerd[1478]: time="2025-03-17T17:58:44.551155751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:6,}" Mar 17 17:58:44.552183 containerd[1478]: time="2025-03-17T17:58:44.551981698Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\"" Mar 17 17:58:44.552601 containerd[1478]: time="2025-03-17T17:58:44.552185370Z" level=info msg="TearDown network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" successfully" Mar 17 17:58:44.552601 containerd[1478]: time="2025-03-17T17:58:44.552204001Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" returns successfully" Mar 17 17:58:44.552771 containerd[1478]: time="2025-03-17T17:58:44.552658386Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\"" Mar 17 17:58:44.553145 containerd[1478]: time="2025-03-17T17:58:44.552864852Z" level=info msg="TearDown network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" successfully" Mar 17 17:58:44.553145 containerd[1478]: time="2025-03-17T17:58:44.552891695Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" returns successfully" Mar 17 17:58:44.555142 containerd[1478]: time="2025-03-17T17:58:44.555022628Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\"" Mar 17 17:58:44.555292 containerd[1478]: time="2025-03-17T17:58:44.555212700Z" level=info msg="TearDown network for sandbox \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" successfully" Mar 17 17:58:44.555292 containerd[1478]: time="2025-03-17T17:58:44.555229322Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" returns successfully" Mar 17 17:58:44.556096 containerd[1478]: time="2025-03-17T17:58:44.555997155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:5,}" Mar 17 17:58:44.571008 systemd-resolved[1338]: Using degraded feature set TCP instead of UDP 
for DNS server 67.207.67.3. Mar 17 17:58:44.735002 containerd[1478]: time="2025-03-17T17:58:44.734567763Z" level=error msg="Failed to destroy network for sandbox \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:44.736841 containerd[1478]: time="2025-03-17T17:58:44.736667793Z" level=error msg="encountered an error cleaning up failed sandbox \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:44.737357 containerd[1478]: time="2025-03-17T17:58:44.737292935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:44.737870 kubelet[1833]: E0317 17:58:44.737799 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:44.737981 kubelet[1833]: E0317 17:58:44.737890 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:44.737981 kubelet[1833]: E0317 17:58:44.737923 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:44.738063 kubelet[1833]: E0317 17:58:44.737976 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:44.811574 containerd[1478]: time="2025-03-17T17:58:44.809830849Z" level=error msg="Failed to destroy network for sandbox \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:44.812675 containerd[1478]: time="2025-03-17T17:58:44.812450027Z" level=error msg="encountered an error cleaning up failed sandbox \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:44.812675 containerd[1478]: time="2025-03-17T17:58:44.812555106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:44.812889 kubelet[1833]: E0317 17:58:44.812843 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:44.813050 kubelet[1833]: E0317 17:58:44.812923 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:44.813050 kubelet[1833]: E0317 17:58:44.812961 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:44.813050 kubelet[1833]: E0317 17:58:44.813024 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-nn6b8" podUID="87935e86-3e97-4acc-b2e8-c204144caa65" Mar 17 17:58:45.177932 kubelet[1833]: E0317 17:58:45.177769 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:45.252093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c-shm.mount: Deactivated successfully. Mar 17 17:58:45.252235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b-shm.mount: Deactivated successfully. Mar 17 17:58:45.548828 kubelet[1833]: I0317 17:58:45.548501 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b" Mar 17 17:58:45.552807 containerd[1478]: time="2025-03-17T17:58:45.552766979Z" level=info msg="StopPodSandbox for \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\"" Mar 17 17:58:45.553284 containerd[1478]: time="2025-03-17T17:58:45.553004701Z" level=info msg="Ensure that sandbox fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b in task-service has been cleanup successfully" Mar 17 17:58:45.553284 containerd[1478]: time="2025-03-17T17:58:45.553176405Z" level=info msg="TearDown network for sandbox \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\" successfully" Mar 17 17:58:45.553284 containerd[1478]: time="2025-03-17T17:58:45.553191266Z" level=info msg="StopPodSandbox for \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\" returns successfully" Mar 17 17:58:45.556352 containerd[1478]: time="2025-03-17T17:58:45.554022044Z" level=info msg="StopPodSandbox for \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\"" Mar 17 17:58:45.556352 containerd[1478]: time="2025-03-17T17:58:45.554115491Z" level=info msg="TearDown network for sandbox \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\" successfully" Mar 17 17:58:45.556352 containerd[1478]: time="2025-03-17T17:58:45.554126080Z" level=info msg="StopPodSandbox for \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\" returns successfully" Mar 17 17:58:45.556187 systemd[1]: run-netns-cni\x2da1dec2cc\x2db435\x2dff4d\x2d09a5\x2d139c573f1d60.mount: Deactivated successfully. 
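The RunPodSandbox failures logged above all hinge on the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename on the host and aborts when the file is absent, which holds until the calico/node container has started and written it. Below is a minimal Go sketch of that style of readiness check, assuming only the file path quoted in the error text; the function and variable names are illustrative, not Calico's actual code.

// nodename_check.go: illustrative readiness check matching the CNI errors above.
package main

import (
	"errors"
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename" // path quoted in the log errors

// readNodename fails while calico/node has not yet written its nodename file,
// mirroring the "stat ... no such file or directory" messages in the log.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, os.ErrNotExist) {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}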
Mar 17 17:58:45.558740 containerd[1478]: time="2025-03-17T17:58:45.558334656Z" level=info msg="StopPodSandbox for \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\"" Mar 17 17:58:45.558740 containerd[1478]: time="2025-03-17T17:58:45.558455167Z" level=info msg="TearDown network for sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\" successfully" Mar 17 17:58:45.558740 containerd[1478]: time="2025-03-17T17:58:45.558467019Z" level=info msg="StopPodSandbox for \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\" returns successfully" Mar 17 17:58:45.561030 containerd[1478]: time="2025-03-17T17:58:45.559396342Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\"" Mar 17 17:58:45.561030 containerd[1478]: time="2025-03-17T17:58:45.559524229Z" level=info msg="TearDown network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" successfully" Mar 17 17:58:45.561030 containerd[1478]: time="2025-03-17T17:58:45.559540218Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" returns successfully" Mar 17 17:58:45.561978 kubelet[1833]: I0317 17:58:45.561492 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c" Mar 17 17:58:45.562119 containerd[1478]: time="2025-03-17T17:58:45.561776715Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\"" Mar 17 17:58:45.562119 containerd[1478]: time="2025-03-17T17:58:45.562031138Z" level=info msg="TearDown network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" successfully" Mar 17 17:58:45.562119 containerd[1478]: time="2025-03-17T17:58:45.562054978Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" returns successfully" Mar 17 17:58:45.562286 containerd[1478]: time="2025-03-17T17:58:45.562247882Z" level=info msg="StopPodSandbox for \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\"" Mar 17 17:58:45.562500 containerd[1478]: time="2025-03-17T17:58:45.562473886Z" level=info msg="Ensure that sandbox 232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c in task-service has been cleanup successfully" Mar 17 17:58:45.562692 containerd[1478]: time="2025-03-17T17:58:45.562668035Z" level=info msg="TearDown network for sandbox \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\" successfully" Mar 17 17:58:45.562692 containerd[1478]: time="2025-03-17T17:58:45.562686539Z" level=info msg="StopPodSandbox for \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\" returns successfully" Mar 17 17:58:45.564434 containerd[1478]: time="2025-03-17T17:58:45.564372811Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\"" Mar 17 17:58:45.564538 containerd[1478]: time="2025-03-17T17:58:45.564509930Z" level=info msg="TearDown network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" successfully" Mar 17 17:58:45.564538 containerd[1478]: time="2025-03-17T17:58:45.564528645Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" returns successfully" Mar 17 17:58:45.564681 containerd[1478]: time="2025-03-17T17:58:45.564654526Z" level=info msg="StopPodSandbox for 
\"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\"" Mar 17 17:58:45.564822 containerd[1478]: time="2025-03-17T17:58:45.564795843Z" level=info msg="TearDown network for sandbox \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\" successfully" Mar 17 17:58:45.564887 containerd[1478]: time="2025-03-17T17:58:45.564818507Z" level=info msg="StopPodSandbox for \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\" returns successfully" Mar 17 17:58:45.565507 systemd[1]: run-netns-cni\x2d453c9c24\x2d3f09\x2d69d6\x2d7578\x2d606b3b2a3ec5.mount: Deactivated successfully. Mar 17 17:58:45.568163 containerd[1478]: time="2025-03-17T17:58:45.568121573Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\"" Mar 17 17:58:45.568508 containerd[1478]: time="2025-03-17T17:58:45.568475827Z" level=info msg="TearDown network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" successfully" Mar 17 17:58:45.568669 containerd[1478]: time="2025-03-17T17:58:45.568642530Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" returns successfully" Mar 17 17:58:45.568980 containerd[1478]: time="2025-03-17T17:58:45.568146250Z" level=info msg="StopPodSandbox for \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\"" Mar 17 17:58:45.569222 containerd[1478]: time="2025-03-17T17:58:45.569196521Z" level=info msg="TearDown network for sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\" successfully" Mar 17 17:58:45.569333 containerd[1478]: time="2025-03-17T17:58:45.569315954Z" level=info msg="StopPodSandbox for \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\" returns successfully" Mar 17 17:58:45.569832 containerd[1478]: time="2025-03-17T17:58:45.569816033Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\"" Mar 17 17:58:45.570151 containerd[1478]: time="2025-03-17T17:58:45.569989356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:7,}" Mar 17 17:58:45.570637 containerd[1478]: time="2025-03-17T17:58:45.570615406Z" level=info msg="TearDown network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" successfully" Mar 17 17:58:45.571058 containerd[1478]: time="2025-03-17T17:58:45.571028375Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" returns successfully" Mar 17 17:58:45.572108 containerd[1478]: time="2025-03-17T17:58:45.572074973Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\"" Mar 17 17:58:45.572365 containerd[1478]: time="2025-03-17T17:58:45.572208871Z" level=info msg="TearDown network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" successfully" Mar 17 17:58:45.572365 containerd[1478]: time="2025-03-17T17:58:45.572224024Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" returns successfully" Mar 17 17:58:45.573469 containerd[1478]: time="2025-03-17T17:58:45.573440306Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\"" Mar 17 17:58:45.575230 containerd[1478]: time="2025-03-17T17:58:45.574914103Z" level=info msg="TearDown network for sandbox 
\"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" successfully" Mar 17 17:58:45.575230 containerd[1478]: time="2025-03-17T17:58:45.574949070Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" returns successfully" Mar 17 17:58:45.577394 containerd[1478]: time="2025-03-17T17:58:45.577347814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:6,}" Mar 17 17:58:45.771646 containerd[1478]: time="2025-03-17T17:58:45.771501974Z" level=error msg="Failed to destroy network for sandbox \"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:45.773278 containerd[1478]: time="2025-03-17T17:58:45.772449491Z" level=error msg="encountered an error cleaning up failed sandbox \"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:45.773278 containerd[1478]: time="2025-03-17T17:58:45.772538747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:45.773517 kubelet[1833]: E0317 17:58:45.772843 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:45.773517 kubelet[1833]: E0317 17:58:45.772917 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:45.773517 kubelet[1833]: E0317 17:58:45.772945 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b69cd" Mar 17 17:58:45.773800 kubelet[1833]: E0317 17:58:45.773005 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b69cd_calico-system(e76c5731-9144-4644-b2d4-50c1e2e23da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b69cd" podUID="e76c5731-9144-4644-b2d4-50c1e2e23da7" Mar 17 17:58:45.775826 containerd[1478]: time="2025-03-17T17:58:45.774851341Z" level=error msg="Failed to destroy network for sandbox \"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:45.775826 containerd[1478]: time="2025-03-17T17:58:45.775274294Z" level=error msg="encountered an error cleaning up failed sandbox \"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:45.775826 containerd[1478]: time="2025-03-17T17:58:45.775360513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:45.776394 kubelet[1833]: E0317 17:58:45.775631 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:45.776394 kubelet[1833]: E0317 17:58:45.775694 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:45.776394 kubelet[1833]: E0317 17:58:45.775811 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nn6b8" Mar 17 17:58:45.776634 kubelet[1833]: E0317 17:58:45.775880 1833 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-nn6b8_default(87935e86-3e97-4acc-b2e8-c204144caa65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-nn6b8" podUID="87935e86-3e97-4acc-b2e8-c204144caa65" Mar 17 17:58:46.093344 containerd[1478]: time="2025-03-17T17:58:46.093216285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 17:58:46.093530 containerd[1478]: time="2025-03-17T17:58:46.093459451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:46.097750 containerd[1478]: time="2025-03-17T17:58:46.097434908Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:46.098835 containerd[1478]: time="2025-03-17T17:58:46.098373345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 7.712946342s" Mar 17 17:58:46.098835 containerd[1478]: time="2025-03-17T17:58:46.098429318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 17:58:46.100070 containerd[1478]: time="2025-03-17T17:58:46.100016985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:46.132876 containerd[1478]: time="2025-03-17T17:58:46.132643769Z" level=info msg="CreateContainer within sandbox \"3a7130949768b2d4d8a12fc5e69dce0aa60a5fe05c35b36b0df4a74bd80bd249\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:58:46.150358 containerd[1478]: time="2025-03-17T17:58:46.150261936Z" level=info msg="CreateContainer within sandbox \"3a7130949768b2d4d8a12fc5e69dce0aa60a5fe05c35b36b0df4a74bd80bd249\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"85e4f6f93fde56c3f20ba7a989e611dee19d52df5dd7eb2de2c420c0d50982eb\"" Mar 17 17:58:46.151259 containerd[1478]: time="2025-03-17T17:58:46.151157732Z" level=info msg="StartContainer for \"85e4f6f93fde56c3f20ba7a989e611dee19d52df5dd7eb2de2c420c0d50982eb\"" Mar 17 17:58:46.154945 kubelet[1833]: E0317 17:58:46.154888 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:46.179352 kubelet[1833]: E0317 17:58:46.178730 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:46.259193 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc-shm.mount: Deactivated successfully. Mar 17 17:58:46.259376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2408924429.mount: Deactivated successfully. Mar 17 17:58:46.277031 systemd[1]: Started cri-containerd-85e4f6f93fde56c3f20ba7a989e611dee19d52df5dd7eb2de2c420c0d50982eb.scope - libcontainer container 85e4f6f93fde56c3f20ba7a989e611dee19d52df5dd7eb2de2c420c0d50982eb. Mar 17 17:58:46.355189 containerd[1478]: time="2025-03-17T17:58:46.354151844Z" level=info msg="StartContainer for \"85e4f6f93fde56c3f20ba7a989e611dee19d52df5dd7eb2de2c420c0d50982eb\" returns successfully" Mar 17 17:58:46.459963 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:58:46.460147 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:58:46.570018 kubelet[1833]: E0317 17:58:46.568157 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:46.584452 kubelet[1833]: I0317 17:58:46.584419 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc" Mar 17 17:58:46.586982 containerd[1478]: time="2025-03-17T17:58:46.586943997Z" level=info msg="StopPodSandbox for \"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\"" Mar 17 17:58:46.589741 containerd[1478]: time="2025-03-17T17:58:46.588029524Z" level=info msg="Ensure that sandbox 7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc in task-service has been cleanup successfully" Mar 17 17:58:46.592868 containerd[1478]: time="2025-03-17T17:58:46.591868992Z" level=info msg="TearDown network for sandbox \"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\" successfully" Mar 17 17:58:46.592868 containerd[1478]: time="2025-03-17T17:58:46.591915155Z" level=info msg="StopPodSandbox for \"7c22a772845b9408ee8d75f8e619e6b4f8a99f77f54764b6ffca6f2d45f43acc\" returns successfully" Mar 17 17:58:46.592568 systemd[1]: run-netns-cni\x2d4eaad525\x2de6b5\x2df5cb\x2df3fd\x2d504b3ac0aea3.mount: Deactivated successfully. 
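The kubelet entry above ("Nameserver limits exceeded ... the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2") reflects the resolver limit of three nameserver entries, so any extra servers inherited from the host resolv.conf are dropped. The following Go sketch only illustrates that truncation under a hypothetical input list; it is not kubelet's code.

// nameserver_cap.go: sketch of the truncation behind the "Nameserver limits
// exceeded" entry above. The glibc resolver honours at most three nameserver
// lines, and kubelet applies the same cap when building a pod's resolv.conf.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

// capNameservers keeps only the first three entries, as the applied line shows.
func capNameservers(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical host list; the first three match the applied line in the log.
	host := strings.Fields("67.207.67.2 67.207.67.3 67.207.67.2 1.1.1.1")
	fmt.Println("applied nameserver line:", strings.Join(capNameservers(host), " "))
}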
Mar 17 17:58:46.596388 containerd[1478]: time="2025-03-17T17:58:46.595911083Z" level=info msg="StopPodSandbox for \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\"" Mar 17 17:58:46.596388 containerd[1478]: time="2025-03-17T17:58:46.596063807Z" level=info msg="TearDown network for sandbox \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\" successfully" Mar 17 17:58:46.596388 containerd[1478]: time="2025-03-17T17:58:46.596078734Z" level=info msg="StopPodSandbox for \"fb3835d0f718232315dc5ddb55ade2a7623a2a7ce0de91870fd6b995ae82741b\" returns successfully" Mar 17 17:58:46.598732 containerd[1478]: time="2025-03-17T17:58:46.598192121Z" level=info msg="StopPodSandbox for \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\"" Mar 17 17:58:46.599562 containerd[1478]: time="2025-03-17T17:58:46.599390086Z" level=info msg="TearDown network for sandbox \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\" successfully" Mar 17 17:58:46.599562 containerd[1478]: time="2025-03-17T17:58:46.599518358Z" level=info msg="StopPodSandbox for \"2b28296f38d87160b6c023b33e6bfb3092b0dca62e0a468fa616ffef94266479\" returns successfully" Mar 17 17:58:46.600753 containerd[1478]: time="2025-03-17T17:58:46.600624014Z" level=info msg="StopPodSandbox for \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\"" Mar 17 17:58:46.601376 containerd[1478]: time="2025-03-17T17:58:46.601193692Z" level=info msg="TearDown network for sandbox \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\" successfully" Mar 17 17:58:46.601376 containerd[1478]: time="2025-03-17T17:58:46.601316415Z" level=info msg="StopPodSandbox for \"d4b0d674bd31c3d3cf23bf1d89efdeedcdd1653949468c595d8c716e3cdb8d3c\" returns successfully" Mar 17 17:58:46.601975 containerd[1478]: time="2025-03-17T17:58:46.601884547Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\"" Mar 17 17:58:46.602186 containerd[1478]: time="2025-03-17T17:58:46.602080616Z" level=info msg="TearDown network for sandbox \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" successfully" Mar 17 17:58:46.602288 containerd[1478]: time="2025-03-17T17:58:46.602253323Z" level=info msg="StopPodSandbox for \"606cc678dc0f5f6b3516343a5975c4322d55d93db6aaafb83a7267df8a364914\" returns successfully" Mar 17 17:58:46.603837 containerd[1478]: time="2025-03-17T17:58:46.603368875Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\"" Mar 17 17:58:46.603837 containerd[1478]: time="2025-03-17T17:58:46.603463521Z" level=info msg="TearDown network for sandbox \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" successfully" Mar 17 17:58:46.603837 containerd[1478]: time="2025-03-17T17:58:46.603473695Z" level=info msg="StopPodSandbox for \"cf0d415b2f8599a86504c9eefc453bd3086b86bf02d355ed320b153718f771ae\" returns successfully" Mar 17 17:58:46.604609 containerd[1478]: time="2025-03-17T17:58:46.604572548Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\"" Mar 17 17:58:46.605268 containerd[1478]: time="2025-03-17T17:58:46.605183125Z" level=info msg="TearDown network for sandbox \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" successfully" Mar 17 17:58:46.605860 containerd[1478]: time="2025-03-17T17:58:46.605756369Z" level=info msg="StopPodSandbox for \"9aed3115c2df4a2a7686f41868cd09d0b4dff13972cb05de60d5c50f13e8890e\" 
returns successfully" Mar 17 17:58:46.608194 containerd[1478]: time="2025-03-17T17:58:46.607805640Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\"" Mar 17 17:58:46.608194 containerd[1478]: time="2025-03-17T17:58:46.607960555Z" level=info msg="TearDown network for sandbox \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" successfully" Mar 17 17:58:46.608194 containerd[1478]: time="2025-03-17T17:58:46.607976891Z" level=info msg="StopPodSandbox for \"348e1b46ab19c9ee36067d576fce168d71e12d086effca72b7e47976843fb26a\" returns successfully" Mar 17 17:58:46.608836 kubelet[1833]: I0317 17:58:46.608509 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859" Mar 17 17:58:46.611468 containerd[1478]: time="2025-03-17T17:58:46.610946421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:8,}" Mar 17 17:58:46.620155 containerd[1478]: time="2025-03-17T17:58:46.619882211Z" level=info msg="StopPodSandbox for \"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\"" Mar 17 17:58:46.625459 containerd[1478]: time="2025-03-17T17:58:46.625174045Z" level=info msg="Ensure that sandbox 8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859 in task-service has been cleanup successfully" Mar 17 17:58:46.632238 containerd[1478]: time="2025-03-17T17:58:46.632185336Z" level=info msg="TearDown network for sandbox \"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\" successfully" Mar 17 17:58:46.633198 systemd[1]: run-netns-cni\x2de0784347\x2ddbc0\x2d526b\x2dba95\x2df88f36090d2e.mount: Deactivated successfully. 
Mar 17 17:58:46.635664 containerd[1478]: time="2025-03-17T17:58:46.634959309Z" level=info msg="StopPodSandbox for \"8bc59844cba0a011a54a2ce15f9db4175dfcb06ead04fd8c8c3140501aa9c859\" returns successfully" Mar 17 17:58:46.636686 containerd[1478]: time="2025-03-17T17:58:46.636351160Z" level=info msg="StopPodSandbox for \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\"" Mar 17 17:58:46.636686 containerd[1478]: time="2025-03-17T17:58:46.636546976Z" level=info msg="TearDown network for sandbox \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\" successfully" Mar 17 17:58:46.636686 containerd[1478]: time="2025-03-17T17:58:46.636570781Z" level=info msg="StopPodSandbox for \"232809565b1a5eccfdd795c086a99bd0bb74e0963aca3e15e8497231d4acfa0c\" returns successfully" Mar 17 17:58:46.637632 containerd[1478]: time="2025-03-17T17:58:46.637433225Z" level=info msg="StopPodSandbox for \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\"" Mar 17 17:58:46.637632 containerd[1478]: time="2025-03-17T17:58:46.637533426Z" level=info msg="TearDown network for sandbox \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\" successfully" Mar 17 17:58:46.637632 containerd[1478]: time="2025-03-17T17:58:46.637542665Z" level=info msg="StopPodSandbox for \"9c510c2bc2f9c1bbdaae743a98a1fddf52b10af594dc6ea66e54b0287c3bc3f7\" returns successfully" Mar 17 17:58:46.638977 containerd[1478]: time="2025-03-17T17:58:46.638945719Z" level=info msg="StopPodSandbox for \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\"" Mar 17 17:58:46.639351 containerd[1478]: time="2025-03-17T17:58:46.639332632Z" level=info msg="TearDown network for sandbox \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\" successfully" Mar 17 17:58:46.640353 containerd[1478]: time="2025-03-17T17:58:46.639536936Z" level=info msg="StopPodSandbox for \"fd2b8bd0c043c35c73b0866dd3cda878abe5240c9baa494e2ad0d92d9709ed50\" returns successfully" Mar 17 17:58:46.640353 containerd[1478]: time="2025-03-17T17:58:46.640178230Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\"" Mar 17 17:58:46.640353 containerd[1478]: time="2025-03-17T17:58:46.640265416Z" level=info msg="TearDown network for sandbox \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" successfully" Mar 17 17:58:46.640353 containerd[1478]: time="2025-03-17T17:58:46.640277121Z" level=info msg="StopPodSandbox for \"e6f7a50ad435283d312df7267982498630c6964ed8fbb031713253019bfcedbf\" returns successfully" Mar 17 17:58:46.641714 containerd[1478]: time="2025-03-17T17:58:46.641675017Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\"" Mar 17 17:58:46.642075 containerd[1478]: time="2025-03-17T17:58:46.642039624Z" level=info msg="TearDown network for sandbox \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" successfully" Mar 17 17:58:46.642207 containerd[1478]: time="2025-03-17T17:58:46.642190085Z" level=info msg="StopPodSandbox for \"f8d85142b0d579594990f86f3368c2bda8cb3cdad63ebaa1ebac11399428e185\" returns successfully" Mar 17 17:58:46.643487 containerd[1478]: time="2025-03-17T17:58:46.643453608Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\"" Mar 17 17:58:46.643846 containerd[1478]: time="2025-03-17T17:58:46.643784893Z" level=info msg="TearDown network for sandbox 
\"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" successfully" Mar 17 17:58:46.643846 containerd[1478]: time="2025-03-17T17:58:46.643808497Z" level=info msg="StopPodSandbox for \"6c11b75df2032fae27b22ed3ec15769bcb8398593316d4755e14c10ed5a03427\" returns successfully" Mar 17 17:58:46.645006 containerd[1478]: time="2025-03-17T17:58:46.644877008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:7,}" Mar 17 17:58:47.077321 systemd-networkd[1374]: cali1fb4d095e8b: Link UP Mar 17 17:58:47.078671 systemd-networkd[1374]: cali1fb4d095e8b: Gained carrier Mar 17 17:58:47.101934 kubelet[1833]: I0317 17:58:47.101444 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rqkvv" podStartSLOduration=4.430056205 podStartE2EDuration="21.101408449s" podCreationTimestamp="2025-03-17 17:58:26 +0000 UTC" firstStartedPulling="2025-03-17 17:58:28.288975579 +0000 UTC m=+3.841967212" lastFinishedPulling="2025-03-17 17:58:46.100275707 +0000 UTC m=+20.513319456" observedRunningTime="2025-03-17 17:58:46.615041772 +0000 UTC m=+21.028085518" watchObservedRunningTime="2025-03-17 17:58:47.101408449 +0000 UTC m=+21.514452188" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.724 [INFO][2829] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.795 [INFO][2829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {134.199.208.120-k8s-csi--node--driver--b69cd-eth0 csi-node-driver- calico-system e76c5731-9144-4644-b2d4-50c1e2e23da7 1125 0 2025-03-17 17:58:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:54877d75d5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 134.199.208.120 csi-node-driver-b69cd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1fb4d095e8b [] []}} ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Namespace="calico-system" Pod="csi-node-driver-b69cd" WorkloadEndpoint="134.199.208.120-k8s-csi--node--driver--b69cd-" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.796 [INFO][2829] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Namespace="calico-system" Pod="csi-node-driver-b69cd" WorkloadEndpoint="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.854 [INFO][2865] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" HandleID="k8s-pod-network.a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Workload="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.881 [INFO][2865] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" HandleID="k8s-pod-network.a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Workload="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000050df0), Attrs:map[string]string{"namespace":"calico-system", "node":"134.199.208.120", "pod":"csi-node-driver-b69cd", "timestamp":"2025-03-17 17:58:46.854276842 +0000 UTC"}, Hostname:"134.199.208.120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.881 [INFO][2865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.881 [INFO][2865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.881 [INFO][2865] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '134.199.208.120' Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.887 [INFO][2865] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.898 [INFO][2865] ipam/ipam.go 372: Looking up existing affinities for host host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.912 [INFO][2865] ipam/ipam.go 521: Ran out of existing affine blocks for host host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.918 [INFO][2865] ipam/ipam.go 538: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.935 [INFO][2865] ipam/ipam_block_reader_writer.go 154: Found free block: 192.168.8.0/26 Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.935 [INFO][2865] ipam/ipam.go 550: Found unclaimed block host="134.199.208.120" subnet=192.168.8.0/26 Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.935 [INFO][2865] ipam/ipam_block_reader_writer.go 171: Trying to create affinity in pending state host="134.199.208.120" subnet=192.168.8.0/26 Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.953 [INFO][2865] ipam/ipam_block_reader_writer.go 201: Successfully created pending affinity for block host="134.199.208.120" subnet=192.168.8.0/26 Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.953 [INFO][2865] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.960 [INFO][2865] ipam/ipam.go 160: The referenced block doesn't exist, trying to create it cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.966 [INFO][2865] ipam/ipam.go 167: Wrote affinity as pending cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.973 [INFO][2865] ipam/ipam.go 176: Attempting to claim the block cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.973 [INFO][2865] ipam/ipam_block_reader_writer.go 223: Attempting to create a new block host="134.199.208.120" subnet=192.168.8.0/26 Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.988 [INFO][2865] ipam/ipam_block_reader_writer.go 264: Successfully created block Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:46.988 [INFO][2865] ipam/ipam_block_reader_writer.go 275: Confirming affinity host="134.199.208.120" 
subnet=192.168.8.0/26 Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:47.002 [INFO][2865] ipam/ipam_block_reader_writer.go 290: Successfully confirmed affinity host="134.199.208.120" subnet=192.168.8.0/26 Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:47.002 [INFO][2865] ipam/ipam.go 585: Block '192.168.8.0/26' has 64 free ips which is more than 1 ips required. host="134.199.208.120" subnet=192.168.8.0/26 Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:47.002 [INFO][2865] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:47.007 [INFO][2865] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843 Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:47.031 [INFO][2865] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" host="134.199.208.120" Mar 17 17:58:47.104467 containerd[1478]: 2025-03-17 17:58:47.059 [INFO][2865] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.0/26] block=192.168.8.0/26 handle="k8s-pod-network.a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" host="134.199.208.120" Mar 17 17:58:47.106508 containerd[1478]: 2025-03-17 17:58:47.059 [INFO][2865] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.0/26] handle="k8s-pod-network.a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" host="134.199.208.120" Mar 17 17:58:47.106508 containerd[1478]: 2025-03-17 17:58:47.059 [INFO][2865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
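The IPAM trace above shows the allocator running out of affine blocks, picking the free block 192.168.8.0/26, noting it "has 64 free ips", and claiming the first address for the csi-node-driver pod. The 64 follows directly from the prefix length: a /26 spans 2^(32-26) addresses. A short, purely illustrative Go check of that arithmetic:

// block_size.go: derives the "64 free ips" figure reported for block
// 192.168.8.0/26 in the IPAM trace above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.8.0/26")
	size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses in a /26
	fmt.Printf("block %s holds %d addresses; first address %s\n",
		block, size, block.Addr())
}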
Mar 17 17:58:47.106508 containerd[1478]: 2025-03-17 17:58:47.059 [INFO][2865] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.0/26] IPv6=[] ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" HandleID="k8s-pod-network.a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Workload="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" Mar 17 17:58:47.106508 containerd[1478]: 2025-03-17 17:58:47.064 [INFO][2829] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Namespace="calico-system" Pod="csi-node-driver-b69cd" WorkloadEndpoint="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"134.199.208.120-k8s-csi--node--driver--b69cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e76c5731-9144-4644-b2d4-50c1e2e23da7", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"134.199.208.120", ContainerID:"", Pod:"csi-node-driver-b69cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1fb4d095e8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:47.106508 containerd[1478]: 2025-03-17 17:58:47.064 [INFO][2829] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.0/32] ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Namespace="calico-system" Pod="csi-node-driver-b69cd" WorkloadEndpoint="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" Mar 17 17:58:47.106508 containerd[1478]: 2025-03-17 17:58:47.064 [INFO][2829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1fb4d095e8b ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Namespace="calico-system" Pod="csi-node-driver-b69cd" WorkloadEndpoint="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" Mar 17 17:58:47.106508 containerd[1478]: 2025-03-17 17:58:47.079 [INFO][2829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Namespace="calico-system" Pod="csi-node-driver-b69cd" WorkloadEndpoint="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" Mar 17 17:58:47.106508 containerd[1478]: 2025-03-17 17:58:47.079 [INFO][2829] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Namespace="calico-system" Pod="csi-node-driver-b69cd" 
WorkloadEndpoint="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"134.199.208.120-k8s-csi--node--driver--b69cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e76c5731-9144-4644-b2d4-50c1e2e23da7", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"134.199.208.120", ContainerID:"a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843", Pod:"csi-node-driver-b69cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1fb4d095e8b", MAC:"26:06:18:73:48:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:47.106508 containerd[1478]: 2025-03-17 17:58:47.101 [INFO][2829] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843" Namespace="calico-system" Pod="csi-node-driver-b69cd" WorkloadEndpoint="134.199.208.120-k8s-csi--node--driver--b69cd-eth0" Mar 17 17:58:47.138326 containerd[1478]: time="2025-03-17T17:58:47.137109406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:47.138326 containerd[1478]: time="2025-03-17T17:58:47.137213984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:47.138326 containerd[1478]: time="2025-03-17T17:58:47.137241015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:47.138326 containerd[1478]: time="2025-03-17T17:58:47.137497910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:47.169269 systemd[1]: Started cri-containerd-a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843.scope - libcontainer container a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843. 
Mar 17 17:58:47.179767 kubelet[1833]: E0317 17:58:47.179335 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:47.210530 containerd[1478]: time="2025-03-17T17:58:47.210461964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b69cd,Uid:e76c5731-9144-4644-b2d4-50c1e2e23da7,Namespace:calico-system,Attempt:8,} returns sandbox id \"a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843\"" Mar 17 17:58:47.213527 containerd[1478]: time="2025-03-17T17:58:47.213408360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:58:47.285378 systemd-networkd[1374]: cali509260b82b1: Link UP Mar 17 17:58:47.288583 systemd-networkd[1374]: cali509260b82b1: Gained carrier Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:46.710 [INFO][2842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:46.789 [INFO][2842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0 nginx-deployment-7fcdb87857- default 87935e86-3e97-4acc-b2e8-c204144caa65 1198 0 2025-03-17 17:58:38 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 134.199.208.120 nginx-deployment-7fcdb87857-nn6b8 eth0 default [] [] [kns.default ksa.default.default] cali509260b82b1 [] []}} ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Namespace="default" Pod="nginx-deployment-7fcdb87857-nn6b8" WorkloadEndpoint="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-" Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:46.790 [INFO][2842] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Namespace="default" Pod="nginx-deployment-7fcdb87857-nn6b8" WorkloadEndpoint="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:46.874 [INFO][2870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" HandleID="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Workload="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:46.897 [INFO][2870] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" HandleID="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Workload="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050e40), Attrs:map[string]string{"namespace":"default", "node":"134.199.208.120", "pod":"nginx-deployment-7fcdb87857-nn6b8", "timestamp":"2025-03-17 17:58:46.874841621 +0000 UTC"}, Hostname:"134.199.208.120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:46.897 [INFO][2870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.059 [INFO][2870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.060 [INFO][2870] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '134.199.208.120' Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.074 [INFO][2870] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" host="134.199.208.120" Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.101 [INFO][2870] ipam/ipam.go 372: Looking up existing affinities for host host="134.199.208.120" Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.114 [INFO][2870] ipam/ipam.go 489: Trying affinity for 192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.123 [INFO][2870] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.139 [INFO][2870] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.139 [INFO][2870] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" host="134.199.208.120" Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.145 [INFO][2870] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5 Mar 17 17:58:47.307164 containerd[1478]: 2025-03-17 17:58:47.163 [INFO][2870] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" host="134.199.208.120" Mar 17 17:58:47.309041 containerd[1478]: 2025-03-17 17:58:47.174 [ERROR][2870] ipam/customresource.go 183: Error updating resource Key=IPAMBlock(192-168-8-0-26) Name="192-168-8-0-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-8-0-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.8.0/26", Affinity:(*string)(0xc0003bcb10), Allocations:[]*int{(*int)(0xc000440f98), (*int)(0xc000441158), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), 
(*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003bcb20), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"134.199.208.120", "pod":"csi-node-driver-b69cd", "timestamp":"2025-03-17 17:58:46.854276842 +0000 UTC"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc000050e40), AttrSecondary:map[string]string{"namespace":"default", "node":"134.199.208.120", "pod":"nginx-deployment-7fcdb87857-nn6b8", "timestamp":"2025-03-17 17:58:46.874841621 +0000 UTC"}}}, SequenceNumber:0x182da8e3bebc5812, SequenceNumberForAllocation:map[string]uint64{"0":0x182da8e3bebc5810, "1":0x182da8e3bebc5811}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-8-0-26": the object has been modified; please apply your changes to the latest version and try again Mar 17 17:58:47.309041 containerd[1478]: 2025-03-17 17:58:47.174 [INFO][2870] ipam/ipam.go 1207: Failed to update block block=192.168.8.0/26 error=update conflict: IPAMBlock(192-168-8-0-26) handle="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" host="134.199.208.120" Mar 17 17:58:47.309041 containerd[1478]: 2025-03-17 17:58:47.223 [INFO][2870] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" host="134.199.208.120" Mar 17 17:58:47.309041 containerd[1478]: 2025-03-17 17:58:47.241 [INFO][2870] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5 Mar 17 17:58:47.309041 containerd[1478]: 2025-03-17 17:58:47.251 [INFO][2870] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" host="134.199.208.120" Mar 17 17:58:47.309041 containerd[1478]: 2025-03-17 17:58:47.276 [INFO][2870] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.2/26] block=192.168.8.0/26 handle="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" host="134.199.208.120" Mar 17 17:58:47.309041 containerd[1478]: 2025-03-17 17:58:47.276 [INFO][2870] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.2/26] handle="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" host="134.199.208.120" Mar 17 17:58:47.309041 containerd[1478]: 2025-03-17 17:58:47.277 [INFO][2870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:58:47.309041 containerd[1478]: 2025-03-17 17:58:47.277 [INFO][2870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.2/26] IPv6=[] ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" HandleID="k8s-pod-network.95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Workload="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" Mar 17 17:58:47.309378 containerd[1478]: 2025-03-17 17:58:47.280 [INFO][2842] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Namespace="default" Pod="nginx-deployment-7fcdb87857-nn6b8" WorkloadEndpoint="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"87935e86-3e97-4acc-b2e8-c204144caa65", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"134.199.208.120", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-nn6b8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali509260b82b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:47.309378 containerd[1478]: 2025-03-17 17:58:47.280 [INFO][2842] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.2/32] ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Namespace="default" Pod="nginx-deployment-7fcdb87857-nn6b8" WorkloadEndpoint="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" Mar 17 17:58:47.309378 containerd[1478]: 2025-03-17 17:58:47.280 [INFO][2842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali509260b82b1 ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Namespace="default" Pod="nginx-deployment-7fcdb87857-nn6b8" WorkloadEndpoint="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" Mar 17 17:58:47.309378 containerd[1478]: 2025-03-17 17:58:47.288 [INFO][2842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Namespace="default" Pod="nginx-deployment-7fcdb87857-nn6b8" WorkloadEndpoint="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" Mar 17 17:58:47.309378 containerd[1478]: 2025-03-17 17:58:47.290 [INFO][2842] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Namespace="default" Pod="nginx-deployment-7fcdb87857-nn6b8" 
WorkloadEndpoint="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"87935e86-3e97-4acc-b2e8-c204144caa65", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"134.199.208.120", ContainerID:"95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5", Pod:"nginx-deployment-7fcdb87857-nn6b8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali509260b82b1", MAC:"16:8b:99:57:f6:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:47.309378 containerd[1478]: 2025-03-17 17:58:47.304 [INFO][2842] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5" Namespace="default" Pod="nginx-deployment-7fcdb87857-nn6b8" WorkloadEndpoint="134.199.208.120-k8s-nginx--deployment--7fcdb87857--nn6b8-eth0" Mar 17 17:58:47.340905 containerd[1478]: time="2025-03-17T17:58:47.339443184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:47.340905 containerd[1478]: time="2025-03-17T17:58:47.339540212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:47.340905 containerd[1478]: time="2025-03-17T17:58:47.339570651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:47.340905 containerd[1478]: time="2025-03-17T17:58:47.339720955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:47.388110 systemd[1]: Started cri-containerd-95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5.scope - libcontainer container 95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5. 
Mar 17 17:58:47.451475 containerd[1478]: time="2025-03-17T17:58:47.451414285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nn6b8,Uid:87935e86-3e97-4acc-b2e8-c204144caa65,Namespace:default,Attempt:7,} returns sandbox id \"95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5\"" Mar 17 17:58:47.619041 kubelet[1833]: E0317 17:58:47.618905 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:58:48.180673 kubelet[1833]: E0317 17:58:48.180610 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:48.248490 systemd[1]: run-containerd-runc-k8s.io-85e4f6f93fde56c3f20ba7a989e611dee19d52df5dd7eb2de2c420c0d50982eb-runc.0P52eu.mount: Deactivated successfully. Mar 17 17:58:48.775740 kernel: bpftool[3140]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:58:48.794877 systemd-networkd[1374]: cali1fb4d095e8b: Gained IPv6LL Mar 17 17:58:48.974689 containerd[1478]: time="2025-03-17T17:58:48.973416800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:48.974689 containerd[1478]: time="2025-03-17T17:58:48.974619576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 17 17:58:48.976552 containerd[1478]: time="2025-03-17T17:58:48.976455449Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:48.981739 containerd[1478]: time="2025-03-17T17:58:48.980363717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:48.981739 containerd[1478]: time="2025-03-17T17:58:48.981623727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.768173671s" Mar 17 17:58:48.981739 containerd[1478]: time="2025-03-17T17:58:48.981669744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 17 17:58:48.984983 containerd[1478]: time="2025-03-17T17:58:48.984941312Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:58:48.987063 containerd[1478]: time="2025-03-17T17:58:48.987015742Z" level=info msg="CreateContainer within sandbox \"a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:58:49.010763 containerd[1478]: time="2025-03-17T17:58:49.010323366Z" level=info msg="CreateContainer within sandbox \"a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"92b1009decdba073d17b430c60b973d594b061be68f6c73a443d044db78c3a2a\"" Mar 17 17:58:49.013735 containerd[1478]: time="2025-03-17T17:58:49.013536297Z" 
level=info msg="StartContainer for \"92b1009decdba073d17b430c60b973d594b061be68f6c73a443d044db78c3a2a\"" Mar 17 17:58:49.051237 systemd-networkd[1374]: cali509260b82b1: Gained IPv6LL Mar 17 17:58:49.074000 systemd[1]: Started cri-containerd-92b1009decdba073d17b430c60b973d594b061be68f6c73a443d044db78c3a2a.scope - libcontainer container 92b1009decdba073d17b430c60b973d594b061be68f6c73a443d044db78c3a2a. Mar 17 17:58:49.140628 containerd[1478]: time="2025-03-17T17:58:49.140549168Z" level=info msg="StartContainer for \"92b1009decdba073d17b430c60b973d594b061be68f6c73a443d044db78c3a2a\" returns successfully" Mar 17 17:58:49.181297 kubelet[1833]: E0317 17:58:49.181190 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:49.233896 systemd-networkd[1374]: vxlan.calico: Link UP Mar 17 17:58:49.233909 systemd-networkd[1374]: vxlan.calico: Gained carrier Mar 17 17:58:50.181400 kubelet[1833]: E0317 17:58:50.181324 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:50.394065 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Mar 17 17:58:51.181970 kubelet[1833]: E0317 17:58:51.181885 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:52.183060 kubelet[1833]: E0317 17:58:52.183006 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:52.388898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1728219569.mount: Deactivated successfully. Mar 17 17:58:53.184134 kubelet[1833]: E0317 17:58:53.184089 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:53.794197 containerd[1478]: time="2025-03-17T17:58:53.794129610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:53.795433 containerd[1478]: time="2025-03-17T17:58:53.794871325Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 17 17:58:53.799890 containerd[1478]: time="2025-03-17T17:58:53.799802289Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:53.801776 containerd[1478]: time="2025-03-17T17:58:53.801706590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:53.803783 containerd[1478]: time="2025-03-17T17:58:53.803731450Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 4.81758637s" Mar 17 17:58:53.803783 containerd[1478]: time="2025-03-17T17:58:53.803780041Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 17:58:53.805172 containerd[1478]: time="2025-03-17T17:58:53.805091211Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:58:53.807018 containerd[1478]: time="2025-03-17T17:58:53.806976308Z" level=info msg="CreateContainer within sandbox \"95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 17:58:53.826830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263935364.mount: Deactivated successfully. Mar 17 17:58:53.830290 containerd[1478]: time="2025-03-17T17:58:53.829832728Z" level=info msg="CreateContainer within sandbox \"95e39319a090c584133e22944a693859e3299c5a55f665d200ccd22ee856efd5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4622b9a9bcfe22dbbc23508258053ada31891ba931f7686885c985cf948f6707\"" Mar 17 17:58:53.831577 containerd[1478]: time="2025-03-17T17:58:53.831360407Z" level=info msg="StartContainer for \"4622b9a9bcfe22dbbc23508258053ada31891ba931f7686885c985cf948f6707\"" Mar 17 17:58:53.883071 systemd[1]: Started cri-containerd-4622b9a9bcfe22dbbc23508258053ada31891ba931f7686885c985cf948f6707.scope - libcontainer container 4622b9a9bcfe22dbbc23508258053ada31891ba931f7686885c985cf948f6707. Mar 17 17:58:53.923340 containerd[1478]: time="2025-03-17T17:58:53.923284417Z" level=info msg="StartContainer for \"4622b9a9bcfe22dbbc23508258053ada31891ba931f7686885c985cf948f6707\" returns successfully" Mar 17 17:58:54.185555 kubelet[1833]: E0317 17:58:54.185382 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:54.674634 kubelet[1833]: I0317 17:58:54.674523 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-nn6b8" podStartSLOduration=10.322258774 podStartE2EDuration="16.674500817s" podCreationTimestamp="2025-03-17 17:58:38 +0000 UTC" firstStartedPulling="2025-03-17 17:58:47.452643656 +0000 UTC m=+21.865687386" lastFinishedPulling="2025-03-17 17:58:53.804885679 +0000 UTC m=+28.217929429" observedRunningTime="2025-03-17 17:58:54.674308915 +0000 UTC m=+29.087352658" watchObservedRunningTime="2025-03-17 17:58:54.674500817 +0000 UTC m=+29.087544563" Mar 17 17:58:55.186214 kubelet[1833]: E0317 17:58:55.186155 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:55.604669 containerd[1478]: time="2025-03-17T17:58:55.603506487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:55.604669 containerd[1478]: time="2025-03-17T17:58:55.604565276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 17 17:58:55.605314 containerd[1478]: time="2025-03-17T17:58:55.605274716Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:55.607664 containerd[1478]: time="2025-03-17T17:58:55.607607853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:55.609073 containerd[1478]: time="2025-03-17T17:58:55.609016095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id 
\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.803868052s" Mar 17 17:58:55.609073 containerd[1478]: time="2025-03-17T17:58:55.609074596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 17 17:58:55.611794 containerd[1478]: time="2025-03-17T17:58:55.611755488Z" level=info msg="CreateContainer within sandbox \"a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:58:55.630269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217036101.mount: Deactivated successfully. Mar 17 17:58:55.635231 containerd[1478]: time="2025-03-17T17:58:55.635163830Z" level=info msg="CreateContainer within sandbox \"a2fca8728d4ad89848ce2865f6c579c4cc6c976400f098fde81a5103fe9da843\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3408cc8c90c66b55af60d25508be4d7a3e9f8003b8783cb10bbbb949002164f7\"" Mar 17 17:58:55.636041 containerd[1478]: time="2025-03-17T17:58:55.635987005Z" level=info msg="StartContainer for \"3408cc8c90c66b55af60d25508be4d7a3e9f8003b8783cb10bbbb949002164f7\"" Mar 17 17:58:55.684442 update_engine[1464]: I20250317 17:58:55.683627 1464 update_attempter.cc:509] Updating boot flags... Mar 17 17:58:55.690721 systemd[1]: Started cri-containerd-3408cc8c90c66b55af60d25508be4d7a3e9f8003b8783cb10bbbb949002164f7.scope - libcontainer container 3408cc8c90c66b55af60d25508be4d7a3e9f8003b8783cb10bbbb949002164f7. 
Mar 17 17:58:55.751968 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3374) Mar 17 17:58:55.788959 containerd[1478]: time="2025-03-17T17:58:55.788768280Z" level=info msg="StartContainer for \"3408cc8c90c66b55af60d25508be4d7a3e9f8003b8783cb10bbbb949002164f7\" returns successfully" Mar 17 17:58:56.187138 kubelet[1833]: E0317 17:58:56.187066 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:56.314291 kubelet[1833]: I0317 17:58:56.314246 1833 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:58:56.314291 kubelet[1833]: I0317 17:58:56.314294 1833 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:58:56.698031 kubelet[1833]: I0317 17:58:56.697844 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b69cd" podStartSLOduration=22.300638919 podStartE2EDuration="30.697819389s" podCreationTimestamp="2025-03-17 17:58:26 +0000 UTC" firstStartedPulling="2025-03-17 17:58:47.212876868 +0000 UTC m=+21.625920612" lastFinishedPulling="2025-03-17 17:58:55.610057352 +0000 UTC m=+30.023101082" observedRunningTime="2025-03-17 17:58:56.695106697 +0000 UTC m=+31.108150461" watchObservedRunningTime="2025-03-17 17:58:56.697819389 +0000 UTC m=+31.110863137" Mar 17 17:58:57.188199 kubelet[1833]: E0317 17:58:57.188128 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:57.517507 systemd[1]: Created slice kubepods-besteffort-pod83c1557e_4c8e_46f4_9171_41b14accc371.slice - libcontainer container kubepods-besteffort-pod83c1557e_4c8e_46f4_9171_41b14accc371.slice. 
Mar 17 17:58:57.558343 kubelet[1833]: I0317 17:58:57.558172 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km29k\" (UniqueName: \"kubernetes.io/projected/83c1557e-4c8e-46f4-9171-41b14accc371-kube-api-access-km29k\") pod \"nfs-server-provisioner-0\" (UID: \"83c1557e-4c8e-46f4-9171-41b14accc371\") " pod="default/nfs-server-provisioner-0" Mar 17 17:58:57.558343 kubelet[1833]: I0317 17:58:57.558242 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/83c1557e-4c8e-46f4-9171-41b14accc371-data\") pod \"nfs-server-provisioner-0\" (UID: \"83c1557e-4c8e-46f4-9171-41b14accc371\") " pod="default/nfs-server-provisioner-0" Mar 17 17:58:57.821735 containerd[1478]: time="2025-03-17T17:58:57.821222013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:83c1557e-4c8e-46f4-9171-41b14accc371,Namespace:default,Attempt:0,}" Mar 17 17:58:58.044447 systemd-networkd[1374]: cali60e51b789ff: Link UP Mar 17 17:58:58.044981 systemd-networkd[1374]: cali60e51b789ff: Gained carrier Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.898 [INFO][3401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {134.199.208.120-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 83c1557e-4c8e-46f4-9171-41b14accc371 1357 0 2025-03-17 17:58:57 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 134.199.208.120 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="134.199.208.120-k8s-nfs--server--provisioner--0-" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.898 [INFO][3401] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.948 [INFO][3412] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" HandleID="k8s-pod-network.a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Workload="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.965 [INFO][3412] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" HandleID="k8s-pod-network.a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" 
Workload="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ecce0), Attrs:map[string]string{"namespace":"default", "node":"134.199.208.120", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 17:58:57.948266467 +0000 UTC"}, Hostname:"134.199.208.120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.965 [INFO][3412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.965 [INFO][3412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.965 [INFO][3412] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '134.199.208.120' Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.973 [INFO][3412] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" host="134.199.208.120" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.982 [INFO][3412] ipam/ipam.go 372: Looking up existing affinities for host host="134.199.208.120" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.995 [INFO][3412] ipam/ipam.go 489: Trying affinity for 192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:57.999 [INFO][3412] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:58.006 [INFO][3412] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:58.006 [INFO][3412] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" host="134.199.208.120" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:58.010 [INFO][3412] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4 Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:58.019 [INFO][3412] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" host="134.199.208.120" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:58.035 [INFO][3412] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.3/26] block=192.168.8.0/26 handle="k8s-pod-network.a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" host="134.199.208.120" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:58.035 [INFO][3412] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.3/26] handle="k8s-pod-network.a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" host="134.199.208.120" Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:58.035 [INFO][3412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:58:58.068442 containerd[1478]: 2025-03-17 17:58:58.035 [INFO][3412] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.3/26] IPv6=[] ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" HandleID="k8s-pod-network.a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Workload="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:58.069412 containerd[1478]: 2025-03-17 17:58:58.037 [INFO][3401] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"134.199.208.120-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"83c1557e-4c8e-46f4-9171-41b14accc371", ResourceVersion:"1357", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"134.199.208.120", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:58.069412 containerd[1478]: 2025-03-17 17:58:58.037 [INFO][3401] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.3/32] ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:58.069412 containerd[1478]: 2025-03-17 17:58:58.038 [INFO][3401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:58.069412 containerd[1478]: 2025-03-17 17:58:58.043 [INFO][3401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:58.069648 containerd[1478]: 2025-03-17 17:58:58.044 [INFO][3401] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"134.199.208.120-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"83c1557e-4c8e-46f4-9171-41b14accc371", ResourceVersion:"1357", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"134.199.208.120", ContainerID:"a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"7a:53:ab:ac:02:56", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:58.069648 containerd[1478]: 2025-03-17 17:58:58.065 [INFO][3401] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="134.199.208.120-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:58.108938 containerd[1478]: time="2025-03-17T17:58:58.107258593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:58.108938 containerd[1478]: time="2025-03-17T17:58:58.107337837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:58.110114 containerd[1478]: time="2025-03-17T17:58:58.108021373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:58.110114 containerd[1478]: time="2025-03-17T17:58:58.108132473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:58.147075 systemd[1]: Started cri-containerd-a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4.scope - libcontainer container a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4. 
Mar 17 17:58:58.189033 kubelet[1833]: E0317 17:58:58.188970 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:58.201418 containerd[1478]: time="2025-03-17T17:58:58.201365229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:83c1557e-4c8e-46f4-9171-41b14accc371,Namespace:default,Attempt:0,} returns sandbox id \"a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4\"" Mar 17 17:58:58.203504 containerd[1478]: time="2025-03-17T17:58:58.203436320Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:58:59.190228 kubelet[1833]: E0317 17:58:59.190079 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:59.995408 systemd-networkd[1374]: cali60e51b789ff: Gained IPv6LL Mar 17 17:59:00.190627 kubelet[1833]: E0317 17:59:00.190542 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:00.796166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143241180.mount: Deactivated successfully. Mar 17 17:59:01.191993 kubelet[1833]: E0317 17:59:01.191667 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:02.191971 kubelet[1833]: E0317 17:59:02.191914 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:03.206659 kubelet[1833]: E0317 17:59:03.206450 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:04.210507 kubelet[1833]: E0317 17:59:04.208650 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:04.855760 containerd[1478]: time="2025-03-17T17:59:04.854345769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:04.857787 containerd[1478]: time="2025-03-17T17:59:04.857645684Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Mar 17 17:59:04.859495 containerd[1478]: time="2025-03-17T17:59:04.859435231Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:04.866601 containerd[1478]: time="2025-03-17T17:59:04.866519876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:04.881856 containerd[1478]: time="2025-03-17T17:59:04.880785724Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.677288322s" Mar 17 17:59:04.882473 containerd[1478]: time="2025-03-17T17:59:04.882224600Z" level=info msg="PullImage 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 17 17:59:04.910471 containerd[1478]: time="2025-03-17T17:59:04.909669681Z" level=info msg="CreateContainer within sandbox \"a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 17:59:04.951854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826896429.mount: Deactivated successfully. Mar 17 17:59:04.959074 containerd[1478]: time="2025-03-17T17:59:04.958938499Z" level=info msg="CreateContainer within sandbox \"a13c49d481b009b11bfc5584503636cb4f6781695e40eee8eeafadbf3fdd59a4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"8e0e756c2187e7f8e8fccc4849b437b4a9b3c2382e8d32cae9137c3924506043\"" Mar 17 17:59:04.961061 containerd[1478]: time="2025-03-17T17:59:04.960904236Z" level=info msg="StartContainer for \"8e0e756c2187e7f8e8fccc4849b437b4a9b3c2382e8d32cae9137c3924506043\"" Mar 17 17:59:05.051301 systemd[1]: Started cri-containerd-8e0e756c2187e7f8e8fccc4849b437b4a9b3c2382e8d32cae9137c3924506043.scope - libcontainer container 8e0e756c2187e7f8e8fccc4849b437b4a9b3c2382e8d32cae9137c3924506043. Mar 17 17:59:05.116209 containerd[1478]: time="2025-03-17T17:59:05.115943735Z" level=info msg="StartContainer for \"8e0e756c2187e7f8e8fccc4849b437b4a9b3c2382e8d32cae9137c3924506043\" returns successfully" Mar 17 17:59:05.209463 kubelet[1833]: E0317 17:59:05.209380 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:05.762257 kubelet[1833]: I0317 17:59:05.761824 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.067063128 podStartE2EDuration="8.761798793s" podCreationTimestamp="2025-03-17 17:58:57 +0000 UTC" firstStartedPulling="2025-03-17 17:58:58.203173558 +0000 UTC m=+32.616217283" lastFinishedPulling="2025-03-17 17:59:04.89790921 +0000 UTC m=+39.310952948" observedRunningTime="2025-03-17 17:59:05.761503646 +0000 UTC m=+40.174547396" watchObservedRunningTime="2025-03-17 17:59:05.761798793 +0000 UTC m=+40.174842543" Mar 17 17:59:06.154997 kubelet[1833]: E0317 17:59:06.154543 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:06.210513 kubelet[1833]: E0317 17:59:06.210420 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:07.211242 kubelet[1833]: E0317 17:59:07.211177 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:08.212066 kubelet[1833]: E0317 17:59:08.211928 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:09.212659 kubelet[1833]: E0317 17:59:09.212598 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:10.214323 kubelet[1833]: E0317 17:59:10.214220 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:11.215281 kubelet[1833]: E0317 17:59:11.215215 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 17:59:12.215581 kubelet[1833]: E0317 17:59:12.215466 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:13.216113 kubelet[1833]: E0317 17:59:13.216046 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:14.217087 kubelet[1833]: E0317 17:59:14.217017 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:15.152217 systemd[1]: Created slice kubepods-besteffort-pod2848c3a5_8367_47b5_937e_c6e9a2f6d717.slice - libcontainer container kubepods-besteffort-pod2848c3a5_8367_47b5_937e_c6e9a2f6d717.slice. Mar 17 17:59:15.209478 kubelet[1833]: I0317 17:59:15.209131 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6jmm\" (UniqueName: \"kubernetes.io/projected/2848c3a5-8367-47b5-937e-c6e9a2f6d717-kube-api-access-x6jmm\") pod \"test-pod-1\" (UID: \"2848c3a5-8367-47b5-937e-c6e9a2f6d717\") " pod="default/test-pod-1" Mar 17 17:59:15.209478 kubelet[1833]: I0317 17:59:15.209203 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5ef466d3-5491-4770-b62a-5d6b90e3b66a\" (UniqueName: \"kubernetes.io/nfs/2848c3a5-8367-47b5-937e-c6e9a2f6d717-pvc-5ef466d3-5491-4770-b62a-5d6b90e3b66a\") pod \"test-pod-1\" (UID: \"2848c3a5-8367-47b5-937e-c6e9a2f6d717\") " pod="default/test-pod-1" Mar 17 17:59:15.217487 kubelet[1833]: E0317 17:59:15.217404 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:15.347953 kernel: FS-Cache: Loaded Mar 17 17:59:15.429814 kernel: RPC: Registered named UNIX socket transport module. Mar 17 17:59:15.430088 kernel: RPC: Registered udp transport module. Mar 17 17:59:15.430170 kernel: RPC: Registered tcp transport module. Mar 17 17:59:15.430204 kernel: RPC: Registered tcp-with-tls transport module. Mar 17 17:59:15.431155 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Mar 17 17:59:15.683945 kernel: NFS: Registering the id_resolver key type Mar 17 17:59:15.684211 kernel: Key type id_resolver registered Mar 17 17:59:15.686764 kernel: Key type id_legacy registered Mar 17 17:59:15.724783 nfsidmap[3611]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-6-424e48892b' Mar 17 17:59:15.741673 nfsidmap[3612]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-6-424e48892b' Mar 17 17:59:16.057322 containerd[1478]: time="2025-03-17T17:59:16.057237670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2848c3a5-8367-47b5-937e-c6e9a2f6d717,Namespace:default,Attempt:0,}" Mar 17 17:59:16.218604 kubelet[1833]: E0317 17:59:16.218515 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:16.263067 systemd-networkd[1374]: cali5ec59c6bf6e: Link UP Mar 17 17:59:16.263349 systemd-networkd[1374]: cali5ec59c6bf6e: Gained carrier Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.122 [INFO][3614] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {134.199.208.120-k8s-test--pod--1-eth0 default 2848c3a5-8367-47b5-937e-c6e9a2f6d717 1421 0 2025-03-17 17:58:58 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 134.199.208.120 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="134.199.208.120-k8s-test--pod--1-" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.122 [INFO][3614] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="134.199.208.120-k8s-test--pod--1-eth0" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.161 [INFO][3626] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" HandleID="k8s-pod-network.a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Workload="134.199.208.120-k8s-test--pod--1-eth0" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.183 [INFO][3626] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" HandleID="k8s-pod-network.a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Workload="134.199.208.120-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293290), Attrs:map[string]string{"namespace":"default", "node":"134.199.208.120", "pod":"test-pod-1", "timestamp":"2025-03-17 17:59:16.161775383 +0000 UTC"}, Hostname:"134.199.208.120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.184 [INFO][3626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.184 [INFO][3626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.184 [INFO][3626] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '134.199.208.120' Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.190 [INFO][3626] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" host="134.199.208.120" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.200 [INFO][3626] ipam/ipam.go 372: Looking up existing affinities for host host="134.199.208.120" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.212 [INFO][3626] ipam/ipam.go 489: Trying affinity for 192.168.8.0/26 host="134.199.208.120" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.216 [INFO][3626] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.222 [INFO][3626] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="134.199.208.120" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.222 [INFO][3626] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" host="134.199.208.120" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.227 [INFO][3626] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1 Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.240 [INFO][3626] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" host="134.199.208.120" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.255 [INFO][3626] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.4/26] block=192.168.8.0/26 handle="k8s-pod-network.a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" host="134.199.208.120" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.255 [INFO][3626] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.4/26] handle="k8s-pod-network.a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" host="134.199.208.120" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.255 [INFO][3626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.255 [INFO][3626] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.4/26] IPv6=[] ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" HandleID="k8s-pod-network.a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Workload="134.199.208.120-k8s-test--pod--1-eth0" Mar 17 17:59:16.279464 containerd[1478]: 2025-03-17 17:59:16.257 [INFO][3614] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="134.199.208.120-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"134.199.208.120-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"2848c3a5-8367-47b5-937e-c6e9a2f6d717", ResourceVersion:"1421", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"134.199.208.120", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:59:16.282007 containerd[1478]: 2025-03-17 17:59:16.257 [INFO][3614] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.4/32] ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="134.199.208.120-k8s-test--pod--1-eth0" Mar 17 17:59:16.282007 containerd[1478]: 2025-03-17 17:59:16.257 [INFO][3614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="134.199.208.120-k8s-test--pod--1-eth0" Mar 17 17:59:16.282007 containerd[1478]: 2025-03-17 17:59:16.262 [INFO][3614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="134.199.208.120-k8s-test--pod--1-eth0" Mar 17 17:59:16.282007 containerd[1478]: 2025-03-17 17:59:16.264 [INFO][3614] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="134.199.208.120-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"134.199.208.120-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"2848c3a5-8367-47b5-937e-c6e9a2f6d717", ResourceVersion:"1421", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 
58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"134.199.208.120", ContainerID:"a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"76:b6:b0:3c:47:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:59:16.282007 containerd[1478]: 2025-03-17 17:59:16.276 [INFO][3614] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="134.199.208.120-k8s-test--pod--1-eth0" Mar 17 17:59:16.311612 containerd[1478]: time="2025-03-17T17:59:16.310951718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:59:16.311612 containerd[1478]: time="2025-03-17T17:59:16.311512104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:59:16.311612 containerd[1478]: time="2025-03-17T17:59:16.311546313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:59:16.312619 containerd[1478]: time="2025-03-17T17:59:16.311679754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:59:16.341016 systemd[1]: Started cri-containerd-a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1.scope - libcontainer container a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1. 
Mar 17 17:59:16.409909 containerd[1478]: time="2025-03-17T17:59:16.409286936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2848c3a5-8367-47b5-937e-c6e9a2f6d717,Namespace:default,Attempt:0,} returns sandbox id \"a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1\"" Mar 17 17:59:16.414093 containerd[1478]: time="2025-03-17T17:59:16.413997884Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:59:16.807229 containerd[1478]: time="2025-03-17T17:59:16.807156682Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:16.808015 containerd[1478]: time="2025-03-17T17:59:16.807955132Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 17:59:16.812169 containerd[1478]: time="2025-03-17T17:59:16.812012896Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 397.93284ms" Mar 17 17:59:16.812169 containerd[1478]: time="2025-03-17T17:59:16.812060791Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 17:59:16.815120 containerd[1478]: time="2025-03-17T17:59:16.815050670Z" level=info msg="CreateContainer within sandbox \"a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 17:59:16.837037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762438428.mount: Deactivated successfully. Mar 17 17:59:16.838059 containerd[1478]: time="2025-03-17T17:59:16.837835238Z" level=info msg="CreateContainer within sandbox \"a1171f678e699e44cd0f2b1c9a91798febcabbab726485b2a8c9d1aafa40adc1\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"848be9a751381124db4a9d2fcec9bc8cb570895c1197f185c1a31a552cf02cbd\"" Mar 17 17:59:16.839003 containerd[1478]: time="2025-03-17T17:59:16.838678764Z" level=info msg="StartContainer for \"848be9a751381124db4a9d2fcec9bc8cb570895c1197f185c1a31a552cf02cbd\"" Mar 17 17:59:16.879960 systemd[1]: Started cri-containerd-848be9a751381124db4a9d2fcec9bc8cb570895c1197f185c1a31a552cf02cbd.scope - libcontainer container 848be9a751381124db4a9d2fcec9bc8cb570895c1197f185c1a31a552cf02cbd. 
Mar 17 17:59:16.923236 containerd[1478]: time="2025-03-17T17:59:16.923063397Z" level=info msg="StartContainer for \"848be9a751381124db4a9d2fcec9bc8cb570895c1197f185c1a31a552cf02cbd\" returns successfully" Mar 17 17:59:17.219174 kubelet[1833]: E0317 17:59:17.218981 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:17.719110 kubelet[1833]: E0317 17:59:17.718985 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:17.797358 kubelet[1833]: I0317 17:59:17.797275 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.396410226 podStartE2EDuration="19.797243675s" podCreationTimestamp="2025-03-17 17:58:58 +0000 UTC" firstStartedPulling="2025-03-17 17:59:16.412262908 +0000 UTC m=+50.825306647" lastFinishedPulling="2025-03-17 17:59:16.813096371 +0000 UTC m=+51.226140096" observedRunningTime="2025-03-17 17:59:17.797138007 +0000 UTC m=+52.210181808" watchObservedRunningTime="2025-03-17 17:59:17.797243675 +0000 UTC m=+52.210287422" Mar 17 17:59:18.042321 systemd-networkd[1374]: cali5ec59c6bf6e: Gained IPv6LL Mar 17 17:59:18.219464 kubelet[1833]: E0317 17:59:18.219409 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:19.219984 kubelet[1833]: E0317 17:59:19.219902 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:20.221052 kubelet[1833]: E0317 17:59:20.220926 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:21.221649 kubelet[1833]: E0317 17:59:21.221569 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:22.236483 kubelet[1833]: E0317 17:59:22.236196 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"