Jan 29 11:24:12.007568 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 29 11:24:12.007616 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:24:12.007638 kernel: BIOS-provided physical RAM map:
Jan 29 11:24:12.007650 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:24:12.007662 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:24:12.007674 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:24:12.007688 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jan 29 11:24:12.007699 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jan 29 11:24:12.007711 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:24:12.007723 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:24:12.007742 kernel: NX (Execute Disable) protection: active
Jan 29 11:24:12.007753 kernel: APIC: Static calls initialized
Jan 29 11:24:12.007764 kernel: SMBIOS 2.8 present.
Jan 29 11:24:12.007776 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 29 11:24:12.007791 kernel: Hypervisor detected: KVM
Jan 29 11:24:12.007804 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:24:12.007833 kernel: kvm-clock: using sched offset of 3561405325 cycles
Jan 29 11:24:12.007849 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:24:12.007876 kernel: tsc: Detected 2494.138 MHz processor
Jan 29 11:24:12.007890 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:24:12.007904 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:24:12.007918 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jan 29 11:24:12.007931 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:24:12.007946 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:24:12.007967 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:24:12.007982 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jan 29 11:24:12.007996 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:12.008010 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:12.008025 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:12.008039 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 29 11:24:12.008053 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:12.008067 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:12.008081 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:12.008103 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:12.008116 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 29 11:24:12.008128 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 29 11:24:12.008143 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 29 11:24:12.008157 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 29 11:24:12.008170 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 29 11:24:12.008184 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 29 11:24:12.008209 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 29 11:24:12.008265 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 11:24:12.008280 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 11:24:12.008336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 11:24:12.008351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 11:24:12.008370 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jan 29 11:24:12.008384 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jan 29 11:24:12.008406 kernel: Zone ranges:
Jan 29 11:24:12.008420 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:24:12.008443 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jan 29 11:24:12.008456 kernel: Normal empty
Jan 29 11:24:12.008469 kernel: Movable zone start for each node
Jan 29 11:24:12.008482 kernel: Early memory node ranges
Jan 29 11:24:12.008495 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:24:12.008510 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jan 29 11:24:12.008523 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jan 29 11:24:12.008545 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:24:12.008559 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:24:12.008573 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jan 29 11:24:12.008587 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:24:12.008601 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:24:12.008615 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:24:12.008629 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:24:12.008643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:24:12.008657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:24:12.008678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:24:12.008694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:24:12.008710 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:24:12.008726 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:24:12.008741 kernel: TSC deadline timer available
Jan 29 11:24:12.008756 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 11:24:12.008771 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:24:12.008786 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 29 11:24:12.008815 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:24:12.008837 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:24:12.008854 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 11:24:12.008870 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 11:24:12.008885 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 11:24:12.008899 kernel: pcpu-alloc: [0] 0 1
Jan 29 11:24:12.008915 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 11:24:12.008933 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:24:12.008947 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:24:12.008968 kernel: random: crng init done
Jan 29 11:24:12.008984 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:24:12.008999 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 11:24:12.009014 kernel: Fallback order for Node 0: 0
Jan 29 11:24:12.009029 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jan 29 11:24:12.009042 kernel: Policy zone: DMA32
Jan 29 11:24:12.009058 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:24:12.009074 kernel: Memory: 1969140K/2096600K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 127200K reserved, 0K cma-reserved)
Jan 29 11:24:12.009089 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:24:12.009110 kernel: Kernel/User page tables isolation: enabled
Jan 29 11:24:12.009126 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 11:24:12.009140 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:24:12.009155 kernel: Dynamic Preempt: voluntary
Jan 29 11:24:12.009169 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:24:12.009184 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:24:12.009198 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:24:12.009212 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:24:12.009227 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:24:12.011377 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:24:12.011393 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:24:12.011407 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:24:12.011421 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 11:24:12.011435 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:24:12.011448 kernel: Console: colour VGA+ 80x25
Jan 29 11:24:12.011462 kernel: printk: console [tty0] enabled
Jan 29 11:24:12.011478 kernel: printk: console [ttyS0] enabled
Jan 29 11:24:12.011495 kernel: ACPI: Core revision 20230628
Jan 29 11:24:12.011512 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:24:12.011535 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:24:12.011551 kernel: x2apic enabled
Jan 29 11:24:12.011563 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:24:12.011573 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:24:12.011582 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jan 29 11:24:12.011592 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Jan 29 11:24:12.011601 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 11:24:12.011611 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 11:24:12.011634 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:24:12.011644 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:24:12.011654 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:24:12.011668 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:24:12.011678 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 11:24:12.011687 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:24:12.011697 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:24:12.011707 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 11:24:12.011717 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 11:24:12.011732 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:24:12.011747 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:24:12.011763 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:24:12.011778 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:24:12.011791 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 11:24:12.011806 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:24:12.011821 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:24:12.011835 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:24:12.011850 kernel: landlock: Up and running.
Jan 29 11:24:12.011859 kernel: SELinux: Initializing.
Jan 29 11:24:12.011889 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 11:24:12.011905 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 11:24:12.011920 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 29 11:24:12.011936 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:24:12.011950 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:24:12.011965 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:24:12.011987 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 29 11:24:12.012002 kernel: signal: max sigframe size: 1776
Jan 29 11:24:12.012018 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:24:12.012035 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:24:12.012050 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 11:24:12.012063 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:24:12.012073 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:24:12.012083 kernel: .... node #0, CPUs: #1
Jan 29 11:24:12.012093 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:24:12.012103 kernel: smpboot: Max logical packages: 1
Jan 29 11:24:12.012118 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Jan 29 11:24:12.012128 kernel: devtmpfs: initialized
Jan 29 11:24:12.012138 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:24:12.012148 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:24:12.012158 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:24:12.012183 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:24:12.012193 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:24:12.012203 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:24:12.012214 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:24:12.012414 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:24:12.012431 kernel: audit: type=2000 audit(1738149850.183:1): state=initialized audit_enabled=0 res=1
Jan 29 11:24:12.012447 kernel: cpuidle: using governor menu
Jan 29 11:24:12.012464 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:24:12.012481 kernel: dca service started, version 1.12.1
Jan 29 11:24:12.012497 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:24:12.012514 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:24:12.012530 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:24:12.012546 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:24:12.012572 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:24:12.012589 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:24:12.012605 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:24:12.012621 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:24:12.012637 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:24:12.012654 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:24:12.012669 kernel: ACPI: Interpreter enabled
Jan 29 11:24:12.012685 kernel: ACPI: PM: (supports S0 S5)
Jan 29 11:24:12.012701 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:24:12.012724 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:24:12.012740 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:24:12.012757 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 11:24:12.012773 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:24:12.013089 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:24:12.013217 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 11:24:12.017585 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 11:24:12.017648 kernel: acpiphp: Slot [3] registered
Jan 29 11:24:12.017664 kernel: acpiphp: Slot [4] registered
Jan 29 11:24:12.017679 kernel: acpiphp: Slot [5] registered
Jan 29 11:24:12.017695 kernel: acpiphp: Slot [6] registered
Jan 29 11:24:12.017713 kernel: acpiphp: Slot [7] registered
Jan 29 11:24:12.017728 kernel: acpiphp: Slot [8] registered
Jan 29 11:24:12.017743 kernel: acpiphp: Slot [9] registered
Jan 29 11:24:12.017759 kernel: acpiphp: Slot [10] registered
Jan 29 11:24:12.017776 kernel: acpiphp: Slot [11] registered
Jan 29 11:24:12.017798 kernel: acpiphp: Slot [12] registered
Jan 29 11:24:12.017813 kernel: acpiphp: Slot [13] registered
Jan 29 11:24:12.017829 kernel: acpiphp: Slot [14] registered
Jan 29 11:24:12.017846 kernel: acpiphp: Slot [15] registered
Jan 29 11:24:12.017863 kernel: acpiphp: Slot [16] registered
Jan 29 11:24:12.017880 kernel: acpiphp: Slot [17] registered
Jan 29 11:24:12.017897 kernel: acpiphp: Slot [18] registered
Jan 29 11:24:12.017913 kernel: acpiphp: Slot [19] registered
Jan 29 11:24:12.017931 kernel: acpiphp: Slot [20] registered
Jan 29 11:24:12.017949 kernel: acpiphp: Slot [21] registered
Jan 29 11:24:12.017967 kernel: acpiphp: Slot [22] registered
Jan 29 11:24:12.017977 kernel: acpiphp: Slot [23] registered
Jan 29 11:24:12.017987 kernel: acpiphp: Slot [24] registered
Jan 29 11:24:12.017996 kernel: acpiphp: Slot [25] registered
Jan 29 11:24:12.018006 kernel: acpiphp: Slot [26] registered
Jan 29 11:24:12.018016 kernel: acpiphp: Slot [27] registered
Jan 29 11:24:12.018025 kernel: acpiphp: Slot [28] registered
Jan 29 11:24:12.018035 kernel: acpiphp: Slot [29] registered
Jan 29 11:24:12.018044 kernel: acpiphp: Slot [30] registered
Jan 29 11:24:12.018058 kernel: acpiphp: Slot [31] registered
Jan 29 11:24:12.018068 kernel: PCI host bridge to bus 0000:00
Jan 29 11:24:12.018260 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:24:12.018419 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:24:12.018572 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:24:12.018760 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 29 11:24:12.018943 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 29 11:24:12.019101 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:24:12.020461 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 11:24:12.020622 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 11:24:12.020746 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 29 11:24:12.020919 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 29 11:24:12.021083 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 29 11:24:12.021222 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 29 11:24:12.021502 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 29 11:24:12.021633 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 29 11:24:12.021801 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 29 11:24:12.025284 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 29 11:24:12.025570 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 29 11:24:12.025714 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 11:24:12.025859 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 11:24:12.026048 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 29 11:24:12.026174 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 29 11:24:12.026330 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 29 11:24:12.026500 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 29 11:24:12.026658 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 29 11:24:12.026778 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:24:12.026967 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:24:12.027086 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 29 11:24:12.027213 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 29 11:24:12.031584 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 29 11:24:12.031739 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:24:12.031852 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 29 11:24:12.032038 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 29 11:24:12.032163 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 29 11:24:12.032298 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 29 11:24:12.032406 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 29 11:24:12.032511 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 29 11:24:12.032623 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 29 11:24:12.032869 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:24:12.033055 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 11:24:12.035268 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 29 11:24:12.035480 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 29 11:24:12.035609 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:24:12.035720 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 29 11:24:12.035830 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 29 11:24:12.036157 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 29 11:24:12.036326 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 29 11:24:12.036565 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 29 11:24:12.036731 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 29 11:24:12.036745 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:24:12.036755 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:24:12.036765 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:24:12.036775 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:24:12.036790 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 11:24:12.036800 kernel: iommu: Default domain type: Translated
Jan 29 11:24:12.036809 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:24:12.036819 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:24:12.036829 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:24:12.036838 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:24:12.036848 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jan 29 11:24:12.039423 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 11:24:12.039761 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 11:24:12.039982 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:24:12.040017 kernel: vgaarb: loaded
Jan 29 11:24:12.040029 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:24:12.040040 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:24:12.040050 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:24:12.040060 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:24:12.040071 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:24:12.040081 kernel: pnp: PnP ACPI init
Jan 29 11:24:12.040091 kernel: pnp: PnP ACPI: found 4 devices
Jan 29 11:24:12.040101 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:24:12.040115 kernel: NET: Registered PF_INET protocol family
Jan 29 11:24:12.040125 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:24:12.040136 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 11:24:12.040146 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:24:12.040155 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:24:12.040165 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 11:24:12.040175 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 11:24:12.040185 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 11:24:12.040195 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 11:24:12.040208 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:24:12.040218 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:24:12.040362 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:24:12.040462 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:24:12.040560 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:24:12.040657 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 29 11:24:12.040752 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 29 11:24:12.040870 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 11:24:12.041048 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 11:24:12.041066 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 11:24:12.044357 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 28901 usecs
Jan 29 11:24:12.044401 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:24:12.044413 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 11:24:12.044451 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jan 29 11:24:12.044462 kernel: Initialise system trusted keyrings
Jan 29 11:24:12.044473 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 11:24:12.044497 kernel: Key type asymmetric registered
Jan 29 11:24:12.044507 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:24:12.044517 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:24:12.044527 kernel: io scheduler mq-deadline registered
Jan 29 11:24:12.044537 kernel: io scheduler kyber registered
Jan 29 11:24:12.044547 kernel: io scheduler bfq registered
Jan 29 11:24:12.044557 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:24:12.044569 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 11:24:12.044578 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 11:24:12.044588 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 11:24:12.044602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:24:12.044612 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:24:12.044622 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:24:12.044631 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:24:12.044641 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:24:12.044651 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:24:12.044839 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 29 11:24:12.044943 kernel: rtc_cmos 00:03: registered as rtc0
Jan 29 11:24:12.045046 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T11:24:11 UTC (1738149851)
Jan 29 11:24:12.045144 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 29 11:24:12.045157 kernel: intel_pstate: CPU model not supported
Jan 29 11:24:12.045167 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:24:12.045176 kernel: Segment Routing with IPv6
Jan 29 11:24:12.045186 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:24:12.045196 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:24:12.045206 kernel: Key type dns_resolver registered
Jan 29 11:24:12.045219 kernel: IPI shorthand broadcast: enabled
Jan 29 11:24:12.045305 kernel: sched_clock: Marking stable (986003556, 98445712)->(1109713167, -25263899)
Jan 29 11:24:12.045320 kernel: registered taskstats version 1
Jan 29 11:24:12.045335 kernel: Loading compiled-in X.509 certificates
Jan 29 11:24:12.045349 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 29 11:24:12.045363 kernel: Key type .fscrypt registered
Jan 29 11:24:12.045376 kernel: Key type fscrypt-provisioning registered
Jan 29 11:24:12.045390 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:24:12.045404 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:24:12.045425 kernel: ima: No architecture policies found
Jan 29 11:24:12.045440 kernel: clk: Disabling unused clocks
Jan 29 11:24:12.045456 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 29 11:24:12.045471 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 11:24:12.045515 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 29 11:24:12.045535 kernel: Run /init as init process
Jan 29 11:24:12.045551 kernel: with arguments:
Jan 29 11:24:12.045567 kernel: /init
Jan 29 11:24:12.045580 kernel: with environment:
Jan 29 11:24:12.045594 kernel: HOME=/
Jan 29 11:24:12.045604 kernel: TERM=linux
Jan 29 11:24:12.045615 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:24:12.045629 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:24:12.045643 systemd[1]: Detected virtualization kvm.
Jan 29 11:24:12.045655 systemd[1]: Detected architecture x86-64.
Jan 29 11:24:12.045665 systemd[1]: Running in initrd.
Jan 29 11:24:12.045676 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:24:12.045690 systemd[1]: Hostname set to .
Jan 29 11:24:12.045702 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:24:12.045712 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:24:12.045723 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:24:12.045734 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:24:12.045746 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:24:12.045757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:24:12.045768 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:24:12.045783 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:24:12.045795 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:24:12.045807 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:24:12.045818 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:24:12.045829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:24:12.045840 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:24:12.045854 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:24:12.045865 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:24:12.045879 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:24:12.045890 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:24:12.045901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:24:12.045912 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:24:12.045926 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:24:12.045937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:24:12.045948 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:24:12.045959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:24:12.045970 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:24:12.045981 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:24:12.045992 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:24:12.046003 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:24:12.046017 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:24:12.046028 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:24:12.046039 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:24:12.046049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:24:12.046060 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:24:12.046114 systemd-journald[183]: Collecting audit messages is disabled.
Jan 29 11:24:12.046144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:24:12.046155 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:24:12.046167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:24:12.046182 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:24:12.046193 kernel: Bridge firewalling registered
Jan 29 11:24:12.046203 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:24:12.046215 systemd-journald[183]: Journal started
Jan 29 11:24:12.046259 systemd-journald[183]: Runtime Journal (/run/log/journal/d7bb115b42b544b3a0525f2eda2fd5f2) is 4.9M, max 39.3M, 34.4M free.
Jan 29 11:24:12.004768 systemd-modules-load[184]: Inserted module 'overlay'
Jan 29 11:24:12.042832 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 29 11:24:12.094332 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:24:12.100392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:12.101467 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:24:12.115554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:24:12.117733 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:24:12.119466 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:24:12.130453 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:24:12.153017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:24:12.156688 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:24:12.160160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:24:12.166539 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:24:12.168420 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:24:12.177550 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:24:12.187887 dracut-cmdline[215]: dracut-dracut-053
Jan 29 11:24:12.191561 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:24:12.245067 systemd-resolved[217]: Positive Trust Anchors:
Jan 29 11:24:12.245088 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:24:12.245145 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:24:12.249241 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 29 11:24:12.252919 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:24:12.254368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:24:12.321324 kernel: SCSI subsystem initialized
Jan 29 11:24:12.331267 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:24:12.344276 kernel: iscsi: registered transport (tcp)
Jan 29 11:24:12.367459 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:24:12.367542 kernel: QLogic iSCSI HBA Driver
Jan 29 11:24:12.425428 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:24:12.432584 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:24:12.471340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:24:12.471455 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:24:12.473637 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:24:12.522286 kernel: raid6: avx2x4 gen() 22059 MB/s
Jan 29 11:24:12.539293 kernel: raid6: avx2x2 gen() 22243 MB/s
Jan 29 11:24:12.556691 kernel: raid6: avx2x1 gen() 19417 MB/s
Jan 29 11:24:12.556815 kernel: raid6: using algorithm avx2x2 gen() 22243 MB/s
Jan 29 11:24:12.574420 kernel: raid6: .... xor() 16152 MB/s, rmw enabled
Jan 29 11:24:12.574540 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 11:24:12.600298 kernel: xor: automatically using best checksumming function avx
Jan 29 11:24:12.772304 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:24:12.787371 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:24:12.795475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:24:12.815420 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 29 11:24:12.821114 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:24:12.831526 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:24:12.849767 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 29 11:24:12.896779 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:24:12.902524 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:24:12.970521 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:24:12.976505 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:24:13.020157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:24:13.024633 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:24:13.025782 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:24:13.026411 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:24:13.033540 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:24:13.068844 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:24:13.075260 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 29 11:24:13.136670 kernel: scsi host0: Virtio SCSI HBA
Jan 29 11:24:13.136930 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 29 11:24:13.137095 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:24:13.137115 kernel: GPT:9289727 != 125829119
Jan 29 11:24:13.137138 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:24:13.137155 kernel: GPT:9289727 != 125829119
Jan 29 11:24:13.137171 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:24:13.137187 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:24:13.137205 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:24:13.137222 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 29 11:24:13.160373 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB)
Jan 29 11:24:13.160602 kernel: libata version 3.00 loaded.
Jan 29 11:24:13.160641 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:24:13.160661 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:24:13.168032 kernel: ACPI: bus type USB registered
Jan 29 11:24:13.168135 kernel: usbcore: registered new interface driver usbfs
Jan 29 11:24:13.169274 kernel: usbcore: registered new interface driver hub
Jan 29 11:24:13.172263 kernel: usbcore: registered new device driver usb
Jan 29 11:24:13.184542 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 29 11:24:13.245550 kernel: scsi host1: ata_piix
Jan 29 11:24:13.245745 kernel: scsi host2: ata_piix
Jan 29 11:24:13.245899 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 29 11:24:13.245915 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 29 11:24:13.245928 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 29 11:24:13.246067 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 29 11:24:13.246187 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 29 11:24:13.246388 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 29 11:24:13.246573 kernel: hub 1-0:1.0: USB hub found
Jan 29 11:24:13.246813 kernel: hub 1-0:1.0: 2 ports detected
Jan 29 11:24:13.198794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:24:13.198941 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:24:13.200702 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:24:13.201317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:24:13.201510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:13.203452 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:24:13.215825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:24:13.271705 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:24:13.317677 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467)
Jan 29 11:24:13.317733 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (471)
Jan 29 11:24:13.320288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:13.326301 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:24:13.337344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:24:13.346018 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:24:13.346584 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:24:13.354587 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:24:13.358412 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:24:13.364211 disk-uuid[531]: Primary Header is updated.
Jan 29 11:24:13.364211 disk-uuid[531]: Secondary Entries is updated.
Jan 29 11:24:13.364211 disk-uuid[531]: Secondary Header is updated.
Jan 29 11:24:13.371284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:24:13.380275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:24:13.407928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:24:14.382310 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:24:14.382410 disk-uuid[533]: The operation has completed successfully.
Jan 29 11:24:14.441345 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:24:14.441519 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:24:14.455601 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:24:14.470881 sh[563]: Success
Jan 29 11:24:14.487269 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 11:24:14.560803 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:24:14.578412 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:24:14.579111 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:24:14.613347 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58
Jan 29 11:24:14.613439 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:24:14.614269 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:24:14.615867 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:24:14.615913 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:24:14.626362 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:24:14.627659 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:24:14.640737 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:24:14.644663 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:24:14.659267 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:14.659363 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:24:14.659381 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:24:14.664276 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:24:14.680853 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:14.680175 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:24:14.691917 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:24:14.697664 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:24:14.834514 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:24:14.845773 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:24:14.856831 ignition[663]: Ignition 2.20.0
Jan 29 11:24:14.856849 ignition[663]: Stage: fetch-offline
Jan 29 11:24:14.858631 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:24:14.856917 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:14.856933 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:24:14.857090 ignition[663]: parsed url from cmdline: ""
Jan 29 11:24:14.857094 ignition[663]: no config URL provided
Jan 29 11:24:14.857100 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:24:14.857110 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:24:14.857116 ignition[663]: failed to fetch config: resource requires networking
Jan 29 11:24:14.857402 ignition[663]: Ignition finished successfully
Jan 29 11:24:14.893059 systemd-networkd[750]: lo: Link UP
Jan 29 11:24:14.893078 systemd-networkd[750]: lo: Gained carrier
Jan 29 11:24:14.896277 systemd-networkd[750]: Enumeration completed
Jan 29 11:24:14.896797 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 29 11:24:14.896803 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 29 11:24:14.896838 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:24:14.897391 systemd[1]: Reached target network.target - Network.
Jan 29 11:24:14.898091 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:24:14.898097 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:24:14.898995 systemd-networkd[750]: eth0: Link UP
Jan 29 11:24:14.899002 systemd-networkd[750]: eth0: Gained carrier
Jan 29 11:24:14.899017 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 29 11:24:14.901919 systemd-networkd[750]: eth1: Link UP
Jan 29 11:24:14.901926 systemd-networkd[750]: eth1: Gained carrier
Jan 29 11:24:14.901943 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:24:14.904690 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:24:14.916369 systemd-networkd[750]: eth0: DHCPv4 address 164.92.103.73/19, gateway 164.92.96.1 acquired from 169.254.169.253
Jan 29 11:24:14.919374 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.4/20 acquired from 169.254.169.253
Jan 29 11:24:14.942987 ignition[754]: Ignition 2.20.0
Jan 29 11:24:14.943000 ignition[754]: Stage: fetch
Jan 29 11:24:14.943219 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:14.943253 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:24:14.943356 ignition[754]: parsed url from cmdline: ""
Jan 29 11:24:14.943360 ignition[754]: no config URL provided
Jan 29 11:24:14.943365 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:24:14.943374 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:24:14.943402 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 29 11:24:14.974725 ignition[754]: GET result: OK
Jan 29 11:24:14.975399 ignition[754]: parsing config with SHA512: ece9e1604f772836bddf7448fb99f33e0b760ce3a45a468fd7ecce01c57b4511210334717f5e2cd3a6090b7e44ad3160b2502b5de9cecfa9fd359fa355b74cc0
Jan 29 11:24:14.979100 unknown[754]: fetched base config from "system"
Jan 29 11:24:14.979116 unknown[754]: fetched base config from "system"
Jan 29 11:24:14.979381 ignition[754]: fetch: fetch complete
Jan 29 11:24:14.979122 unknown[754]: fetched user config from "digitalocean"
Jan 29 11:24:14.979386 ignition[754]: fetch: fetch passed
Jan 29 11:24:14.982656 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:24:14.979440 ignition[754]: Ignition finished successfully
Jan 29 11:24:14.988506 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:24:15.015897 ignition[761]: Ignition 2.20.0
Jan 29 11:24:15.015915 ignition[761]: Stage: kargs
Jan 29 11:24:15.016124 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:15.016135 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:24:15.017042 ignition[761]: kargs: kargs passed
Jan 29 11:24:15.017103 ignition[761]: Ignition finished successfully
Jan 29 11:24:15.020046 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:24:15.028715 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:24:15.050327 ignition[767]: Ignition 2.20.0
Jan 29 11:24:15.050340 ignition[767]: Stage: disks
Jan 29 11:24:15.050587 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:15.052603 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:24:15.050602 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:24:15.053788 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:24:15.051322 ignition[767]: disks: disks passed
Jan 29 11:24:15.057969 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:24:15.051377 ignition[767]: Ignition finished successfully
Jan 29 11:24:15.059173 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:24:15.060276 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:24:15.060802 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:24:15.067616 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:24:15.088902 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:24:15.093474 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:24:15.100404 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:24:15.224297 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 29 11:24:15.224306 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:24:15.225601 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:24:15.236428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:24:15.239163 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:24:15.243578 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Jan 29 11:24:15.250562 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 11:24:15.258832 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (784)
Jan 29 11:24:15.258861 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:15.258875 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:24:15.258887 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:24:15.257005 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:24:15.257054 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:24:15.263267 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:24:15.270909 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:24:15.271620 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:24:15.275913 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:24:15.371830 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:24:15.384324 coreos-metadata[787]: Jan 29 11:24:15.384 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 11:24:15.388226 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:24:15.390967 coreos-metadata[786]: Jan 29 11:24:15.390 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 11:24:15.397575 coreos-metadata[787]: Jan 29 11:24:15.396 INFO Fetch successful
Jan 29 11:24:15.401423 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:24:15.404703 coreos-metadata[786]: Jan 29 11:24:15.404 INFO Fetch successful
Jan 29 11:24:15.405469 coreos-metadata[787]: Jan 29 11:24:15.405 INFO wrote hostname ci-4186.1.0-a-fee62db618 to /sysroot/etc/hostname
Jan 29 11:24:15.410010 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:24:15.415498 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Jan 29 11:24:15.416926 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:24:15.416971 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Jan 29 11:24:15.550988 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:24:15.558419 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:24:15.560435 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:24:15.575306 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:15.597824 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:24:15.613797 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:24:15.625101 ignition[908]: INFO : Ignition 2.20.0
Jan 29 11:24:15.625101 ignition[908]: INFO : Stage: mount
Jan 29 11:24:15.626206 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:15.626206 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:24:15.626206 ignition[908]: INFO : mount: mount passed
Jan 29 11:24:15.626206 ignition[908]: INFO : Ignition finished successfully
Jan 29 11:24:15.627187 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:24:15.632428 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:24:15.650526 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:24:15.663266 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (919)
Jan 29 11:24:15.665319 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:15.665400 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:24:15.666571 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:24:15.670557 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:24:15.673152 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:24:15.706751 ignition[936]: INFO : Ignition 2.20.0
Jan 29 11:24:15.707486 ignition[936]: INFO : Stage: files
Jan 29 11:24:15.708808 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:15.708808 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:24:15.710315 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:24:15.712199 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:24:15.712199 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:24:15.715985 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:24:15.716569 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:24:15.717259 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:24:15.716626 unknown[936]: wrote ssh authorized keys file for user: core
Jan 29 11:24:15.718681 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:24:15.718681 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:24:15.720087 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:24:15.720087 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:24:15.720087 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 11:24:15.720087 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 11:24:15.720087 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 11:24:15.720087 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 29 11:24:16.192550 systemd-networkd[750]: eth0: Gained IPv6LL
Jan 29 11:24:16.265676 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 11:24:16.559525 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 11:24:16.560850 ignition[936]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:24:16.560850 ignition[936]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:24:16.560850 ignition[936]: INFO : files: files passed
Jan 29 11:24:16.560850 ignition[936]: INFO : Ignition finished successfully
Jan 29 11:24:16.561407 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:24:16.574187 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:24:16.576470 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:24:16.582215 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:24:16.582902 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:24:16.598098 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:24:16.598098 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:24:16.601616 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:24:16.603173 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:24:16.604930 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:24:16.608518 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:24:16.659126 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:24:16.659357 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:24:16.660631 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:24:16.661124 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:24:16.661964 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:24:16.669534 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:24:16.684303 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:24:16.691552 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:24:16.711619 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:24:16.713095 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:24:16.714626 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:24:16.715055 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:24:16.715227 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:24:16.716282 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:24:16.716832 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:24:16.717548 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:24:16.718326 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:24:16.718942 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:24:16.719757 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:24:16.720507 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:24:16.721296 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:24:16.721993 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:24:16.722794 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:24:16.723411 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:24:16.723561 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:24:16.724893 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:24:16.725550 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:24:16.726415 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:24:16.726569 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:24:16.727175 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:24:16.727382 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:24:16.728630 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:24:16.728809 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:24:16.729714 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:24:16.729815 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:24:16.730697 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 11:24:16.730941 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:24:16.742680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:24:16.748613 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:24:16.749711 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:24:16.749916 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:24:16.751135 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:24:16.752173 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:24:16.760742 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:24:16.761491 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:24:16.774140 ignition[988]: INFO : Ignition 2.20.0
Jan 29 11:24:16.775954 ignition[988]: INFO : Stage: umount
Jan 29 11:24:16.775954 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:16.775954 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:24:16.780362 ignition[988]: INFO : umount: umount passed
Jan 29 11:24:16.780362 ignition[988]: INFO : Ignition finished successfully
Jan 29 11:24:16.778417 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:24:16.778540 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:24:16.779627 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:24:16.779870 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:24:16.782461 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:24:16.782529 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:24:16.783204 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 11:24:16.783364 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 11:24:16.783845 systemd[1]: Stopped target network.target - Network.
Jan 29 11:24:16.784789 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:24:16.784858 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:24:16.785774 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:24:16.786498 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:24:16.790898 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:24:16.791506 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:24:16.792287 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:24:16.792652 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:24:16.792709 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:24:16.793081 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:24:16.793122 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:24:16.793528 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:24:16.793604 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:24:16.794546 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:24:16.794608 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:24:16.795430 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:24:16.797625 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:24:16.800787 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:24:16.801719 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:24:16.801825 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:24:16.802287 systemd-networkd[750]: eth1: DHCPv6 lease lost
Jan 29 11:24:16.805392 systemd-networkd[750]: eth0: DHCPv6 lease lost
Jan 29 11:24:16.808139 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:24:16.808318 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:24:16.812640 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:24:16.812896 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:24:16.815335 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:24:16.815397 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:24:16.815901 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:24:16.815966 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:24:16.820420 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:24:16.821576 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:24:16.821683 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:24:16.823346 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:24:16.823423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:24:16.824080 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:24:16.824155 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:24:16.824777 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:24:16.824829 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:24:16.826738 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:24:16.848730 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:24:16.848981 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:24:16.850653 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:24:16.850738 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:24:16.852178 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:24:16.852271 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:24:16.852955 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:24:16.853022 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:24:16.854337 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:24:16.854414 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:24:16.855615 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:24:16.855755 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:24:16.863703 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:24:16.865056 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:24:16.865148 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:24:16.866222 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:24:16.866382 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:24:16.867626 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:24:16.867721 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:24:16.872032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:24:16.872116 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:16.874385 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:24:16.874611 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:24:16.878089 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:24:16.878260 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:24:16.880167 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:24:16.886668 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:24:16.902533 systemd[1]: Switching root.
Jan 29 11:24:16.941631 systemd-journald[183]: Journal stopped
Jan 29 11:24:18.129781 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:24:18.129874 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:24:18.129895 kernel: SELinux: policy capability open_perms=1
Jan 29 11:24:18.129907 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:24:18.129919 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:24:18.129931 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:24:18.129943 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:24:18.129957 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:24:18.129969 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:24:18.129981 kernel: audit: type=1403 audit(1738149857.172:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:24:18.129994 systemd[1]: Successfully loaded SELinux policy in 41.706ms.
Jan 29 11:24:18.130019 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.485ms.
Jan 29 11:24:18.130032 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:24:18.130045 systemd[1]: Detected virtualization kvm.
Jan 29 11:24:18.130058 systemd[1]: Detected architecture x86-64.
Jan 29 11:24:18.130071 systemd[1]: Detected first boot.
Jan 29 11:24:18.130086 systemd[1]: Hostname set to .
Jan 29 11:24:18.130099 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:24:18.130112 zram_generator::config[1034]: No configuration found.
Jan 29 11:24:18.130125 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:24:18.130138 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:24:18.130151 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:24:18.130164 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:24:18.130177 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:24:18.130192 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:24:18.130204 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:24:18.130217 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:24:18.130242 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:24:18.130260 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:24:18.130273 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:24:18.130285 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:24:18.130298 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:24:18.130310 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:24:18.130326 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:24:18.130338 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:24:18.130354 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:24:18.130366 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:24:18.130379 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:24:18.130391 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:24:18.130403 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:24:18.130419 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:24:18.130432 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:24:18.130445 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:24:18.130457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:24:18.130473 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:24:18.130486 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:24:18.130498 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:24:18.130510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:24:18.130526 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:24:18.130538 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:24:18.130551 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:24:18.130563 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:24:18.130575 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:24:18.130588 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:24:18.130600 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:24:18.130613 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:24:18.130625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:18.130640 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:24:18.130652 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:24:18.130664 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:24:18.130677 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:24:18.130689 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:24:18.130701 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:24:18.130737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:24:18.130750 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:24:18.130765 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:24:18.130778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:24:18.130790 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:24:18.130802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:24:18.130814 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:24:18.130826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:24:18.130839 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:24:18.130852 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:24:18.130864 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:24:18.130879 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:24:18.130891 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:24:18.130903 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:24:18.130915 kernel: fuse: init (API version 7.39)
Jan 29 11:24:18.130930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:24:18.130943 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:24:18.130955 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:24:18.130967 kernel: loop: module loaded
Jan 29 11:24:18.130978 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:24:18.130993 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:24:18.131011 systemd[1]: Stopped verity-setup.service.
Jan 29 11:24:18.131024 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:18.131037 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:24:18.131049 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:24:18.131061 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:24:18.131076 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:24:18.131089 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:24:18.131102 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:24:18.131115 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:24:18.131130 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:24:18.131142 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:24:18.131155 kernel: ACPI: bus type drm_connector registered
Jan 29 11:24:18.131167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:24:18.131179 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:24:18.131192 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:24:18.131205 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:24:18.131217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:24:18.133349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:24:18.133386 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:24:18.133399 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:24:18.133413 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:24:18.133426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:24:18.133480 systemd-journald[1103]: Collecting audit messages is disabled.
Jan 29 11:24:18.133509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:24:18.133523 systemd-journald[1103]: Journal started
Jan 29 11:24:18.133556 systemd-journald[1103]: Runtime Journal (/run/log/journal/d7bb115b42b544b3a0525f2eda2fd5f2) is 4.9M, max 39.3M, 34.4M free.
Jan 29 11:24:17.821542 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:24:17.842679 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:24:17.843297 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:24:18.137292 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:24:18.139313 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:24:18.151977 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:24:18.164095 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:24:18.172397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:24:18.179425 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:24:18.182050 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:24:18.182101 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:24:18.184770 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:24:18.200170 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:24:18.206458 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:24:18.207085 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:24:18.220634 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:24:18.226568 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:24:18.227055 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:24:18.234497 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:24:18.235266 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:24:18.246487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:24:18.250655 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:24:18.255409 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:24:18.259187 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:24:18.260213 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:24:18.261067 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:24:18.262086 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:24:18.295392 systemd-journald[1103]: Time spent on flushing to /var/log/journal/d7bb115b42b544b3a0525f2eda2fd5f2 is 133.898ms for 971 entries.
Jan 29 11:24:18.295392 systemd-journald[1103]: System Journal (/var/log/journal/d7bb115b42b544b3a0525f2eda2fd5f2) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:24:18.457924 systemd-journald[1103]: Received client request to flush runtime journal.
Jan 29 11:24:18.458009 kernel: loop0: detected capacity change from 0 to 138184
Jan 29 11:24:18.458030 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:24:18.458047 kernel: loop1: detected capacity change from 0 to 8
Jan 29 11:24:18.308743 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:24:18.320705 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:24:18.321579 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:24:18.323031 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:24:18.335543 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:24:18.403779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:24:18.438154 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:24:18.440637 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:24:18.442593 udevadm[1156]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 11:24:18.465155 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:24:18.472681 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jan 29 11:24:18.472703 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jan 29 11:24:18.481889 kernel: loop2: detected capacity change from 0 to 141000
Jan 29 11:24:18.481369 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:24:18.495929 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:24:18.557694 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:24:18.567418 kernel: loop3: detected capacity change from 0 to 218376
Jan 29 11:24:18.565559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:24:18.606416 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 29 11:24:18.606438 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 29 11:24:18.614304 kernel: loop4: detected capacity change from 0 to 138184
Jan 29 11:24:18.614478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:24:18.664315 kernel: loop5: detected capacity change from 0 to 8
Jan 29 11:24:18.669278 kernel: loop6: detected capacity change from 0 to 141000
Jan 29 11:24:18.711480 kernel: loop7: detected capacity change from 0 to 218376
Jan 29 11:24:18.737329 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 29 11:24:18.737922 (sd-merge)[1178]: Merged extensions into '/usr'.
Jan 29 11:24:18.749838 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:24:18.749864 systemd[1]: Reloading...
Jan 29 11:24:18.922070 zram_generator::config[1206]: No configuration found.
Jan 29 11:24:19.085630 ldconfig[1144]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:24:19.143444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:24:19.202256 systemd[1]: Reloading finished in 451 ms.
Jan 29 11:24:19.227320 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:24:19.234033 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:24:19.244583 systemd[1]: Starting ensure-sysext.service...
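The loopN and squashfs messages plus the (sd-merge) entries show systemd-sysext loop-mounting each of the four extension images and overlaying them onto /usr. For an image to be accepted for merging, systemd-sysext requires an extension-release file inside it whose fields match the host os-release. A sketch of the expected layout, with hypothetical contents for the kubernetes image (the actual binaries and field values inside this host's image are not visible in the log):

```
kubernetes-v1.32.0-x86-64.raw  (squashfs image)
└── usr/
    ├── bin/                         # extension payload, e.g. kubelet, kubeadm (assumed)
    └── lib/extension-release.d/
        └── extension-release.kubernetes
```

Hypothetical extension-release.kubernetes, in os-release key=value syntax:

```
ID=flatcar
SYSEXT_LEVEL=1.0
```

ID must match the host's /etc/os-release ID (or be `_any`); the filename after `extension-release.` must match the image name, which is why the Ignition config symlinks the versioned .raw file to /etc/extensions/kubernetes.raw.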
Jan 29 11:24:19.249555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:24:19.269460 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:24:19.269489 systemd[1]: Reloading...
Jan 29 11:24:19.327324 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:24:19.327677 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:24:19.330105 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:24:19.331488 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 29 11:24:19.331553 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 29 11:24:19.336708 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:24:19.336723 systemd-tmpfiles[1249]: Skipping /boot
Jan 29 11:24:19.355784 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:24:19.355800 systemd-tmpfiles[1249]: Skipping /boot
Jan 29 11:24:19.429264 zram_generator::config[1279]: No configuration found.
Jan 29 11:24:19.582957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:24:19.653168 systemd[1]: Reloading finished in 382 ms.
Jan 29 11:24:19.671344 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:24:19.676031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:24:19.702657 systemd[1]: Starting audit-rules.service - Load Audit Rules...
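The "Duplicate line for path" warnings above are benign: systemd-tmpfiles processes every tmpfiles.d fragment, and when two fragments declare the same path it keeps the first and ignores the rest. Lines follow the tmpfiles.d(5) column format; a sketch of the kind of duplication being reported (the exact contents of this host's provision.conf and systemd.conf are not shown in the log):

```
# tmpfiles.d line format: Type Path Mode User Group Age Argument
# If both a Flatcar fragment and the stock systemd fragment declare
# /var/log/journal, the second declaration triggers the warning above.
d /var/log/journal 2755 root systemd-journal - -
```

After the sysext merge added fragments under /usr, such overlaps between base-OS and extension-supplied tmpfiles.d files become more likely, which is consistent with these warnings appearing right after the systemd-sysext reload.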
Jan 29 11:24:19.706028 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:24:19.720598 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:24:19.728581 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:24:19.734568 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:24:19.747552 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:24:19.755388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:19.755721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:24:19.764817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:24:19.770258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:24:19.779726 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:24:19.781557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:24:19.781783 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:19.785484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:19.785790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:24:19.785984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:24:19.786078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:19.789868 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:19.790146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:24:19.799693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:24:19.800403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:24:19.800604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:19.812761 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:24:19.814929 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:24:19.830772 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:24:19.832933 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:24:19.835497 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:24:19.838438 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Jan 29 11:24:19.863417 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:24:19.865418 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:24:19.874333 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:24:19.904382 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:24:19.912643 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:24:19.912935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:24:19.915726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:24:19.915976 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:24:19.917056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:24:19.918361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:24:19.919504 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:24:19.919697 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:24:19.922497 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:24:19.922646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:24:19.933552 augenrules[1365]: No rules
Jan 29 11:24:19.935440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:24:19.936780 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:24:19.937363 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:24:19.939642 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:24:19.956647 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:24:20.038000 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:24:20.038835 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:24:20.072716 systemd-networkd[1376]: lo: Link UP
Jan 29 11:24:20.073375 systemd-networkd[1376]: lo: Gained carrier
Jan 29 11:24:20.075142 systemd-networkd[1376]: Enumeration completed
Jan 29 11:24:20.075511 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:24:20.080517 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:24:20.102421 systemd-resolved[1330]: Positive Trust Anchors:
Jan 29 11:24:20.103482 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:24:20.103568 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:24:20.113561 systemd-resolved[1330]: Using system hostname 'ci-4186.1.0-a-fee62db618'.
Jan 29 11:24:20.117388 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:24:20.118116 systemd[1]: Reached target network.target - Network.
Jan 29 11:24:20.119323 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:24:20.129460 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 11:24:20.166439 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1392)
Jan 29 11:24:20.195491 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 29 11:24:20.197564 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:20.197817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:24:20.202508 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:24:20.204854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:24:20.209475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:24:20.210021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:24:20.210066 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:24:20.210084 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:24:20.233733 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 29 11:24:20.237418 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 29 11:24:20.240213 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:24:20.242562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:24:20.252291 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:24:20.252488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:24:20.253751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:24:20.254504 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:24:20.261207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:24:20.263388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:24:20.267193 systemd-networkd[1376]: eth0: Configuring with /run/systemd/network/10-92:38:d5:71:75:b8.network.
Jan 29 11:24:20.271799 systemd-networkd[1376]: eth0: Link UP
Jan 29 11:24:20.272277 systemd-networkd[1376]: eth0: Gained carrier
Jan 29 11:24:20.278436 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 29 11:24:20.288792 systemd-networkd[1376]: eth1: Configuring with /run/systemd/network/10-3e:a2:6b:ee:08:bf.network.
Jan 29 11:24:20.292625 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 29 11:24:20.293407 systemd-networkd[1376]: eth1: Link UP
Jan 29 11:24:20.293511 systemd-networkd[1376]: eth1: Gained carrier
Jan 29 11:24:20.297602 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 29 11:24:20.298796 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 29 11:24:20.307016 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:24:20.315520 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:24:20.319331 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 11:24:20.328312 kernel: ACPI: button: Power Button [PWRF]
Jan 29 11:24:20.330863 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 29 11:24:20.349223 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:24:20.384332 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 11:24:20.446319 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:24:20.522893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:24:20.578298 kernel: EDAC MC: Ver: 3.0.0
Jan 29 11:24:20.591320 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 29 11:24:20.595293 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 29 11:24:20.599296 kernel: Console: switching to colour dummy device 80x25
Jan 29 11:24:20.600437 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 11:24:20.600536 kernel: [drm] features: -context_init
Jan 29 11:24:20.602285 kernel: [drm] number of scanouts: 1
Jan 29 11:24:20.603288 kernel: [drm] number of cap sets: 0
Jan 29 11:24:20.609284 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 29 11:24:20.613267 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 29 11:24:20.615964 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 11:24:20.624290 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 11:24:20.626783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:24:20.627184 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:20.644813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:24:20.650041 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:24:20.650614 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:20.659976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:24:20.662840 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:24:20.673386 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:24:20.693886 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:24:20.707416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:20.728287 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:24:20.730479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:24:20.730674 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:24:20.731043 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:24:20.731183 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:24:20.733683 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:24:20.734822 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:24:20.734962 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:24:20.735031 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:24:20.735063 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:24:20.735117 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:24:20.736853 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:24:20.739853 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:24:20.746886 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:24:20.748981 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:24:20.753822 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:24:20.755495 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:24:20.756145 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:24:20.756755 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:24:20.756787 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:24:20.760464 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:24:20.769605 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 11:24:20.771496 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:24:20.776504 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:24:20.784526 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:24:20.788450 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:24:20.790761 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:24:20.796492 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:24:20.813715 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:24:20.819336 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:24:20.829575 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:24:20.834673 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:24:20.835494 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:24:20.842506 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:24:20.846566 coreos-metadata[1441]: Jan 29 11:24:20.846 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 11:24:20.853398 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:24:20.856365 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:24:20.860661 jq[1443]: false
Jan 29 11:24:20.865112 coreos-metadata[1441]: Jan 29 11:24:20.864 INFO Fetch successful
Jan 29 11:24:20.867776 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:24:20.869323 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:24:20.886855 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:24:20.887056 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:24:20.913422 extend-filesystems[1444]: Found loop4
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found loop5
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found loop6
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found loop7
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found vda
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found vda1
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found vda2
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found vda3
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found usr
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found vda4
Jan 29 11:24:20.918329 extend-filesystems[1444]: Found vda6
Jan 29 11:24:20.950359 extend-filesystems[1444]: Found vda7
Jan 29 11:24:20.950359 extend-filesystems[1444]: Found vda9
Jan 29 11:24:20.950359 extend-filesystems[1444]: Checking size of /dev/vda9
Jan 29 11:24:20.936952 dbus-daemon[1442]: [system] SELinux support is enabled
Jan 29 11:24:20.935569 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:24:20.963391 jq[1453]: true
Jan 29 11:24:20.936153 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:24:20.944646 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:24:20.962156 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:24:20.962663 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:24:20.962744 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:24:20.964750 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:24:20.964868 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 29 11:24:20.964891 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:24:20.981970 update_engine[1451]: I20250129 11:24:20.974957 1451 main.cc:92] Flatcar Update Engine starting
Jan 29 11:24:20.989777 extend-filesystems[1444]: Resized partition /dev/vda9
Jan 29 11:24:20.988591 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:24:20.995576 update_engine[1451]: I20250129 11:24:20.990811 1451 update_check_scheduler.cc:74] Next update check in 10m27s
Jan 29 11:24:20.995645 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:24:21.002701 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:24:21.013419 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 29 11:24:21.011296 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 11:24:21.015875 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:24:21.019682 jq[1474]: true
Jan 29 11:24:21.054217 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1379)
Jan 29 11:24:21.097457 systemd-logind[1450]: New seat seat0.
Jan 29 11:24:21.099483 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 11:24:21.099658 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 11:24:21.099931 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:24:21.138026 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 29 11:24:21.170106 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 11:24:21.170106 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 29 11:24:21.170106 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 29 11:24:21.176926 extend-filesystems[1444]: Resized filesystem in /dev/vda9
Jan 29 11:24:21.176926 extend-filesystems[1444]: Found vdb
Jan 29 11:24:21.179731 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:24:21.179937 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:24:21.225260 bash[1503]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:24:21.232727 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:24:21.250640 systemd[1]: Starting sshkeys.service...
Jan 29 11:24:21.265428 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 11:24:21.292715 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 11:24:21.305267 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:24:21.332661 coreos-metadata[1512]: Jan 29 11:24:21.332 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 11:24:21.347938 coreos-metadata[1512]: Jan 29 11:24:21.347 INFO Fetch successful
Jan 29 11:24:21.356874 unknown[1512]: wrote ssh authorized keys file for user: core
Jan 29 11:24:21.422187 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:24:21.423703 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 29 11:24:21.429693 systemd[1]: Finished sshkeys.service.
Jan 29 11:24:21.457447 containerd[1470]: time="2025-01-29T11:24:21.457335082Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 11:24:21.469542 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:24:21.497766 containerd[1470]: time="2025-01-29T11:24:21.496302115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:24:21.497988 containerd[1470]: time="2025-01-29T11:24:21.497946410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:24:21.497988 containerd[1470]: time="2025-01-29T11:24:21.497984861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:24:21.498092 containerd[1470]: time="2025-01-29T11:24:21.498004306Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:24:21.498195 containerd[1470]: time="2025-01-29T11:24:21.498174783Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:24:21.498195 containerd[1470]: time="2025-01-29T11:24:21.498193211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498293 containerd[1470]: time="2025-01-29T11:24:21.498275205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498317 containerd[1470]: time="2025-01-29T11:24:21.498293792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498486 containerd[1470]: time="2025-01-29T11:24:21.498466519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498515 containerd[1470]: time="2025-01-29T11:24:21.498486635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498515 containerd[1470]: time="2025-01-29T11:24:21.498501725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498515 containerd[1470]: time="2025-01-29T11:24:21.498511378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498606 containerd[1470]: time="2025-01-29T11:24:21.498588733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498816 containerd[1470]: time="2025-01-29T11:24:21.498798071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498934 containerd[1470]: time="2025-01-29T11:24:21.498917659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:24:21.498965 containerd[1470]: time="2025-01-29T11:24:21.498934480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:24:21.499037 containerd[1470]: time="2025-01-29T11:24:21.499018571Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:24:21.499093 containerd[1470]: time="2025-01-29T11:24:21.499078884Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:24:21.506614 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:24:21.514715 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.520554519Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.520625735Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.520658623Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.520690825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.520719246Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.520914018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.521282454Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.521430766Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.521455692Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.521476361Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.521493999Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.521532152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.521549963Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:24:21.522279 containerd[1470]: time="2025-01-29T11:24:21.521563955Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521578670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521593924Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521606486Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521619426Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521639800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521654874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521667667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521680660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521692470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521705888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521719062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521731638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521743888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.522751 containerd[1470]: time="2025-01-29T11:24:21.521757068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521778387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521791981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521804474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521820343Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521856014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521870023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521883051Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521934574Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521952935Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521963447Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521974216Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521983476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.521995685Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:24:21.523030 containerd[1470]: time="2025-01-29T11:24:21.522005876Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:24:21.523400 containerd[1470]: time="2025-01-29T11:24:21.522016987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:24:21.523767 containerd[1470]: time="2025-01-29T11:24:21.523702571Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:24:21.523992 containerd[1470]: time="2025-01-29T11:24:21.523977310Z" level=info msg="Connect containerd service"
Jan 29 11:24:21.524333 containerd[1470]: time="2025-01-29T11:24:21.524301037Z" level=info msg="using legacy CRI server"
Jan 29 11:24:21.524416 containerd[1470]: time="2025-01-29T11:24:21.524401499Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:24:21.524621 containerd[1470]: time="2025-01-29T11:24:21.524606141Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:24:21.525403 containerd[1470]:
time="2025-01-29T11:24:21.525375009Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:24:21.525975 containerd[1470]: time="2025-01-29T11:24:21.525926412Z" level=info msg="Start subscribing containerd event" Jan 29 11:24:21.526177 containerd[1470]: time="2025-01-29T11:24:21.526155929Z" level=info msg="Start recovering state" Jan 29 11:24:21.527675 containerd[1470]: time="2025-01-29T11:24:21.526269908Z" level=info msg="Start event monitor" Jan 29 11:24:21.527675 containerd[1470]: time="2025-01-29T11:24:21.526294388Z" level=info msg="Start snapshots syncer" Jan 29 11:24:21.527675 containerd[1470]: time="2025-01-29T11:24:21.526304612Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:24:21.527675 containerd[1470]: time="2025-01-29T11:24:21.526315025Z" level=info msg="Start streaming server" Jan 29 11:24:21.527675 containerd[1470]: time="2025-01-29T11:24:21.526427513Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:24:21.527675 containerd[1470]: time="2025-01-29T11:24:21.526483320Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:24:21.527675 containerd[1470]: time="2025-01-29T11:24:21.526803713Z" level=info msg="containerd successfully booted in 0.070384s" Jan 29 11:24:21.527384 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:24:21.531787 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:24:21.532029 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:24:21.543886 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:24:21.560179 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:24:21.568647 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 29 11:24:21.569432 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 29 11:24:21.571801 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 29 11:24:21.572031 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:24:21.574139 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:24:21.577351 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:24:21.580827 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:24:21.599732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:24:21.603684 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:24:21.635754 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:24:21.952610 systemd-networkd[1376]: eth1: Gained IPv6LL Jan 29 11:24:21.953483 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 29 11:24:22.176556 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:24:22.184884 systemd[1]: Started sshd@0-164.92.103.73:22-194.0.234.37:36890.service - OpenSSH per-connection server daemon (194.0.234.37:36890). Jan 29 11:24:22.838492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:24:22.841541 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:24:22.844780 systemd[1]: Startup finished in 1.140s (kernel) + 5.446s (initrd) + 5.713s (userspace) = 12.300s. 
Jan 29 11:24:22.858935 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:24:22.875391 agetty[1540]: failed to open credentials directory Jan 29 11:24:22.879657 agetty[1541]: failed to open credentials directory Jan 29 11:24:23.157988 systemd[1]: Started sshd@1-164.92.103.73:22-139.178.89.65:39894.service - OpenSSH per-connection server daemon (139.178.89.65:39894). Jan 29 11:24:23.248337 sshd[1571]: Accepted publickey for core from 139.178.89.65 port 39894 ssh2: RSA SHA256:59drt2qHDKEsS3HhMr44vLZOd7nM2v4xOBrenZzCkc8 Jan 29 11:24:23.249099 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:23.264009 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:24:23.271865 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:24:23.280301 systemd-logind[1450]: New session 1 of user core. Jan 29 11:24:23.311958 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:24:23.324750 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:24:23.331385 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:24:23.517788 systemd[1575]: Queued start job for default target default.target. Jan 29 11:24:23.524920 systemd[1575]: Created slice app.slice - User Application Slice. Jan 29 11:24:23.524990 systemd[1575]: Reached target paths.target - Paths. Jan 29 11:24:23.525015 systemd[1575]: Reached target timers.target - Timers. Jan 29 11:24:23.530545 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:24:23.567047 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:24:23.568685 systemd[1575]: Reached target sockets.target - Sockets. 
Jan 29 11:24:23.568734 systemd[1575]: Reached target basic.target - Basic System. Jan 29 11:24:23.568835 systemd[1575]: Reached target default.target - Main User Target. Jan 29 11:24:23.568897 systemd[1575]: Startup finished in 228ms. Jan 29 11:24:23.569415 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:24:23.580780 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:24:23.675969 systemd[1]: Started sshd@2-164.92.103.73:22-139.178.89.65:39904.service - OpenSSH per-connection server daemon (139.178.89.65:39904). Jan 29 11:24:23.690035 kubelet[1561]: E0129 11:24:23.689959 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:24:23.693772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:24:23.694019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:24:23.695417 systemd[1]: kubelet.service: Consumed 1.382s CPU time. Jan 29 11:24:23.760037 sshd[1587]: Accepted publickey for core from 139.178.89.65 port 39904 ssh2: RSA SHA256:59drt2qHDKEsS3HhMr44vLZOd7nM2v4xOBrenZzCkc8 Jan 29 11:24:23.762289 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:23.770398 systemd-logind[1450]: New session 2 of user core. Jan 29 11:24:23.781639 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:24:23.848642 sshd[1590]: Connection closed by 139.178.89.65 port 39904 Jan 29 11:24:23.849424 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:23.862970 systemd[1]: sshd@2-164.92.103.73:22-139.178.89.65:39904.service: Deactivated successfully. 
Jan 29 11:24:23.865496 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:24:23.868494 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:24:23.873851 systemd[1]: Started sshd@3-164.92.103.73:22-139.178.89.65:39908.service - OpenSSH per-connection server daemon (139.178.89.65:39908). Jan 29 11:24:23.876950 systemd-logind[1450]: Removed session 2. Jan 29 11:24:23.942872 sshd[1595]: Accepted publickey for core from 139.178.89.65 port 39908 ssh2: RSA SHA256:59drt2qHDKEsS3HhMr44vLZOd7nM2v4xOBrenZzCkc8 Jan 29 11:24:23.945336 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:23.954760 systemd-logind[1450]: New session 3 of user core. Jan 29 11:24:23.961485 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:24:24.020195 sshd[1597]: Connection closed by 139.178.89.65 port 39908 Jan 29 11:24:24.021119 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:24.035932 systemd[1]: sshd@3-164.92.103.73:22-139.178.89.65:39908.service: Deactivated successfully. Jan 29 11:24:24.038177 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:24:24.040615 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:24:24.045695 systemd[1]: Started sshd@4-164.92.103.73:22-139.178.89.65:39920.service - OpenSSH per-connection server daemon (139.178.89.65:39920). Jan 29 11:24:24.047982 systemd-logind[1450]: Removed session 3. Jan 29 11:24:24.108918 sshd[1602]: Accepted publickey for core from 139.178.89.65 port 39920 ssh2: RSA SHA256:59drt2qHDKEsS3HhMr44vLZOd7nM2v4xOBrenZzCkc8 Jan 29 11:24:24.110877 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:24.118589 systemd-logind[1450]: New session 4 of user core. Jan 29 11:24:24.124588 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 29 11:24:24.189742 sshd[1604]: Connection closed by 139.178.89.65 port 39920 Jan 29 11:24:24.190487 sshd-session[1602]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:24.206606 systemd[1]: sshd@4-164.92.103.73:22-139.178.89.65:39920.service: Deactivated successfully. Jan 29 11:24:24.209310 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:24:24.212222 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:24:24.226064 systemd[1]: Started sshd@5-164.92.103.73:22-139.178.89.65:39934.service - OpenSSH per-connection server daemon (139.178.89.65:39934). Jan 29 11:24:24.228761 systemd-logind[1450]: Removed session 4. Jan 29 11:24:24.282419 sshd[1609]: Accepted publickey for core from 139.178.89.65 port 39934 ssh2: RSA SHA256:59drt2qHDKEsS3HhMr44vLZOd7nM2v4xOBrenZzCkc8 Jan 29 11:24:24.285195 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:24.293139 systemd-logind[1450]: New session 5 of user core. Jan 29 11:24:24.301666 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:24:24.379203 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:24:24.379842 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:24:24.398432 sudo[1612]: pam_unix(sudo:session): session closed for user root Jan 29 11:24:24.402479 sshd[1611]: Connection closed by 139.178.89.65 port 39934 Jan 29 11:24:24.403370 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:24.415723 systemd[1]: sshd@5-164.92.103.73:22-139.178.89.65:39934.service: Deactivated successfully. Jan 29 11:24:24.418883 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:24:24.421398 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. 
Jan 29 11:24:24.426699 systemd[1]: Started sshd@6-164.92.103.73:22-139.178.89.65:39940.service - OpenSSH per-connection server daemon (139.178.89.65:39940). Jan 29 11:24:24.428545 systemd-logind[1450]: Removed session 5. Jan 29 11:24:24.493829 sshd[1617]: Accepted publickey for core from 139.178.89.65 port 39940 ssh2: RSA SHA256:59drt2qHDKEsS3HhMr44vLZOd7nM2v4xOBrenZzCkc8 Jan 29 11:24:24.496066 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:24.506410 systemd-logind[1450]: New session 6 of user core. Jan 29 11:24:24.511816 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:24:24.579106 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:24:24.580753 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:24:24.588010 sudo[1621]: pam_unix(sudo:session): session closed for user root Jan 29 11:24:24.596552 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:24:24.597077 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:24:24.621045 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:24:24.676078 augenrules[1643]: No rules Jan 29 11:24:24.678347 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:24:24.678611 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:24:24.680741 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 29 11:24:24.684385 sshd[1619]: Connection closed by 139.178.89.65 port 39940 Jan 29 11:24:24.685397 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:24.700795 systemd[1]: sshd@6-164.92.103.73:22-139.178.89.65:39940.service: Deactivated successfully. 
Jan 29 11:24:24.704473 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:24:24.709575 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:24:24.716151 systemd[1]: Started sshd@7-164.92.103.73:22-139.178.89.65:39948.service - OpenSSH per-connection server daemon (139.178.89.65:39948). Jan 29 11:24:24.718873 systemd-logind[1450]: Removed session 6. Jan 29 11:24:24.761838 sshd[1554]: Invalid user anonymous from 194.0.234.37 port 36890 Jan 29 11:24:24.789508 sshd[1651]: Accepted publickey for core from 139.178.89.65 port 39948 ssh2: RSA SHA256:59drt2qHDKEsS3HhMr44vLZOd7nM2v4xOBrenZzCkc8 Jan 29 11:24:24.792421 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:24.800297 systemd-logind[1450]: New session 7 of user core. Jan 29 11:24:24.811996 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:24:24.875093 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:24:24.875529 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:24:25.817607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:24:25.817855 systemd[1]: kubelet.service: Consumed 1.382s CPU time. Jan 29 11:24:25.824645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:24:25.874712 systemd[1]: Reloading requested from client PID 1688 ('systemctl') (unit session-7.scope)... Jan 29 11:24:25.874729 systemd[1]: Reloading... Jan 29 11:24:26.022282 zram_generator::config[1727]: No configuration found. Jan 29 11:24:26.213987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:24:26.309410 systemd[1]: Reloading finished in 434 ms. 
Jan 29 11:24:26.374098 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:24:26.374355 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:24:26.374783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:24:26.380098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:24:26.571971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:24:26.586736 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:24:26.661881 kubelet[1783]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:24:26.661881 kubelet[1783]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:24:26.661881 kubelet[1783]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:24:26.662737 kubelet[1783]: I0129 11:24:26.661943 1783 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:24:26.891973 sshd[1554]: Connection closed by invalid user anonymous 194.0.234.37 port 36890 [preauth] Jan 29 11:24:26.894995 systemd[1]: sshd@0-164.92.103.73:22-194.0.234.37:36890.service: Deactivated successfully. 
Jan 29 11:24:27.273388 kubelet[1783]: I0129 11:24:27.272833 1783 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:24:27.273388 kubelet[1783]: I0129 11:24:27.272887 1783 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:24:27.274154 kubelet[1783]: I0129 11:24:27.274121 1783 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:24:27.310351 kubelet[1783]: I0129 11:24:27.309995 1783 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:24:27.325508 kubelet[1783]: E0129 11:24:27.325417 1783 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:24:27.325508 kubelet[1783]: I0129 11:24:27.325484 1783 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:24:27.330438 kubelet[1783]: I0129 11:24:27.330396 1783 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:24:27.330944 kubelet[1783]: I0129 11:24:27.330720 1783 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:24:27.331075 kubelet[1783]: I0129 11:24:27.330813 1783 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"164.92.103.73","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:24:27.332687 kubelet[1783]: I0129 11:24:27.331082 1783 topology_manager.go:138] "Creating topology manager with none 
policy" Jan 29 11:24:27.332687 kubelet[1783]: I0129 11:24:27.331098 1783 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:24:27.332687 kubelet[1783]: I0129 11:24:27.331347 1783 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:24:27.336697 kubelet[1783]: I0129 11:24:27.336624 1783 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:24:27.336697 kubelet[1783]: I0129 11:24:27.336686 1783 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:24:27.337053 kubelet[1783]: I0129 11:24:27.336777 1783 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:24:27.337053 kubelet[1783]: I0129 11:24:27.336801 1783 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:24:27.340893 kubelet[1783]: E0129 11:24:27.340709 1783 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:27.340893 kubelet[1783]: E0129 11:24:27.340787 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:27.345539 kubelet[1783]: I0129 11:24:27.345021 1783 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:24:27.345793 kubelet[1783]: I0129 11:24:27.345778 1783 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:24:27.346605 kubelet[1783]: W0129 11:24:27.346573 1783 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 11:24:27.349630 kubelet[1783]: I0129 11:24:27.349579 1783 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:24:27.349868 kubelet[1783]: I0129 11:24:27.349848 1783 server.go:1287] "Started kubelet" Jan 29 11:24:27.353426 kubelet[1783]: I0129 11:24:27.353378 1783 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:24:27.362510 kubelet[1783]: I0129 11:24:27.362451 1783 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:24:27.365319 kubelet[1783]: I0129 11:24:27.365139 1783 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:24:27.367063 kubelet[1783]: I0129 11:24:27.365553 1783 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:24:27.371364 kubelet[1783]: E0129 11:24:27.369129 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found" Jan 29 11:24:27.372576 kubelet[1783]: I0129 11:24:27.369574 1783 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:24:27.373032 kubelet[1783]: I0129 11:24:27.372928 1783 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:24:27.373176 kubelet[1783]: I0129 11:24:27.369932 1783 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:24:27.373748 kubelet[1783]: I0129 11:24:27.373732 1783 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:24:27.373898 kubelet[1783]: I0129 11:24:27.373847 1783 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:24:27.379278 kubelet[1783]: I0129 11:24:27.378708 1783 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:24:27.379278 
kubelet[1783]: E0129 11:24:27.373863 1783 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{164.92.103.73.181f261861c6baad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:164.92.103.73,UID:164.92.103.73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:164.92.103.73,},FirstTimestamp:2025-01-29 11:24:27.349785261 +0000 UTC m=+0.749103681,LastTimestamp:2025-01-29 11:24:27.349785261 +0000 UTC m=+0.749103681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:164.92.103.73,}" Jan 29 11:24:27.382275 kubelet[1783]: E0129 11:24:27.379227 1783 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:24:27.382275 kubelet[1783]: W0129 11:24:27.381673 1783 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 11:24:27.382275 kubelet[1783]: E0129 11:24:27.381722 1783 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 11:24:27.382275 kubelet[1783]: W0129 11:24:27.381854 1783 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "164.92.103.73" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 
11:24:27.382275 kubelet[1783]: E0129 11:24:27.381871 1783 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"164.92.103.73\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 11:24:27.383803 kubelet[1783]: E0129 11:24:27.383766 1783 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"164.92.103.73\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 29 11:24:27.384083 kubelet[1783]: I0129 11:24:27.384051 1783 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:24:27.384083 kubelet[1783]: I0129 11:24:27.384082 1783 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:24:27.422201 kubelet[1783]: I0129 11:24:27.422065 1783 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:24:27.422201 kubelet[1783]: I0129 11:24:27.422111 1783 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:24:27.422201 kubelet[1783]: I0129 11:24:27.422211 1783 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:24:27.428058 kubelet[1783]: I0129 11:24:27.428006 1783 policy_none.go:49] "None policy: Start" Jan 29 11:24:27.428058 kubelet[1783]: I0129 11:24:27.428040 1783 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:24:27.428058 kubelet[1783]: I0129 11:24:27.428054 1783 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:24:27.437717 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:24:27.451054 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 29 11:24:27.457008 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 11:24:27.466565 kubelet[1783]: I0129 11:24:27.466269 1783 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:24:27.466565 kubelet[1783]: I0129 11:24:27.466499 1783 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 11:24:27.466565 kubelet[1783]: I0129 11:24:27.466523 1783 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:24:27.466861 kubelet[1783]: I0129 11:24:27.466823 1783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:24:27.468434 kubelet[1783]: I0129 11:24:27.468402 1783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:24:27.468553 kubelet[1783]: I0129 11:24:27.468545 1783 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 29 11:24:27.468645 kubelet[1783]: I0129 11:24:27.468636 1783 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 29 11:24:27.468686 kubelet[1783]: I0129 11:24:27.468680 1783 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 11:24:27.468854 kubelet[1783]: E0129 11:24:27.468835 1783 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 29 11:24:27.472036 kubelet[1783]: I0129 11:24:27.471919 1783 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:24:27.479054 kubelet[1783]: E0129 11:24:27.479025 1783 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 11:24:27.479179 kubelet[1783]: E0129 11:24:27.479073 1783 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"164.92.103.73\" not found"
Jan 29 11:24:27.567985 kubelet[1783]: I0129 11:24:27.567806 1783 kubelet_node_status.go:76] "Attempting to register node" node="164.92.103.73"
Jan 29 11:24:27.582090 kubelet[1783]: I0129 11:24:27.582016 1783 kubelet_node_status.go:79] "Successfully registered node" node="164.92.103.73"
Jan 29 11:24:27.582090 kubelet[1783]: E0129 11:24:27.582101 1783 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"164.92.103.73\": node \"164.92.103.73\" not found"
Jan 29 11:24:27.598127 kubelet[1783]: E0129 11:24:27.598091 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:27.657343 sudo[1654]: pam_unix(sudo:session): session closed for user root
Jan 29 11:24:27.661184 sshd[1653]: Connection closed by 139.178.89.65 port 39948
Jan 29 11:24:27.662210 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
Jan 29 11:24:27.666704 systemd[1]: sshd@7-164.92.103.73:22-139.178.89.65:39948.service: Deactivated successfully.
Jan 29 11:24:27.669399 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:24:27.671484 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:24:27.672949 systemd-logind[1450]: Removed session 7.
Jan 29 11:24:27.699186 kubelet[1783]: E0129 11:24:27.699100 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:27.799967 kubelet[1783]: E0129 11:24:27.799904 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:27.900493 kubelet[1783]: E0129 11:24:27.900411 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:28.001287 kubelet[1783]: E0129 11:24:28.001196 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:28.102371 kubelet[1783]: E0129 11:24:28.102308 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:28.203642 kubelet[1783]: E0129 11:24:28.203449 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:28.278282 kubelet[1783]: I0129 11:24:28.278126 1783 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 11:24:28.278601 kubelet[1783]: W0129 11:24:28.278471 1783 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 11:24:28.278601 kubelet[1783]: W0129 11:24:28.278550 1783 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 11:24:28.303787 kubelet[1783]: E0129 11:24:28.303701 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:28.341678 kubelet[1783]: E0129 11:24:28.341501 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:28.404346 kubelet[1783]: E0129 11:24:28.404263 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:28.504990 kubelet[1783]: E0129 11:24:28.504691 1783 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"164.92.103.73\" not found"
Jan 29 11:24:28.607425 kubelet[1783]: I0129 11:24:28.607143 1783 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 29 11:24:28.608141 containerd[1470]: time="2025-01-29T11:24:28.608003811Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 11:24:28.608877 kubelet[1783]: I0129 11:24:28.608365 1783 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 29 11:24:29.342447 kubelet[1783]: I0129 11:24:29.342372 1783 apiserver.go:52] "Watching apiserver"
Jan 29 11:24:29.343062 kubelet[1783]: E0129 11:24:29.342367 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:29.350550 kubelet[1783]: E0129 11:24:29.349446 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d"
Jan 29 11:24:29.361104 systemd[1]: Created slice kubepods-besteffort-podd680fae3_bced_460c_a58b_8ebe48dfde4f.slice - libcontainer container kubepods-besteffort-podd680fae3_bced_460c_a58b_8ebe48dfde4f.slice.
Jan 29 11:24:29.371053 systemd[1]: Created slice kubepods-besteffort-podad856552_61cc_43a5_86c0_ab923739b40b.slice - libcontainer container kubepods-besteffort-podad856552_61cc_43a5_86c0_ab923739b40b.slice.
Jan 29 11:24:29.375396 kubelet[1783]: I0129 11:24:29.374776 1783 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 11:24:29.388495 kubelet[1783]: I0129 11:24:29.388441 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-var-lib-calico\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.388928 kubelet[1783]: I0129 11:24:29.388889 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbcjx\" (UniqueName: \"kubernetes.io/projected/d680fae3-bced-460c-a58b-8ebe48dfde4f-kube-api-access-sbcjx\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.389066 kubelet[1783]: I0129 11:24:29.389049 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmf7j\" (UniqueName: \"kubernetes.io/projected/27184255-bf1a-4cc5-b67d-e597a1ff246d-kube-api-access-nmf7j\") pod \"csi-node-driver-4x5wc\" (UID: \"27184255-bf1a-4cc5-b67d-e597a1ff246d\") " pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:29.389172 kubelet[1783]: I0129 11:24:29.389153 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-xtables-lock\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.389316 kubelet[1783]: I0129 11:24:29.389298 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-policysync\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.389435 kubelet[1783]: I0129 11:24:29.389418 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-flexvol-driver-host\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.389554 kubelet[1783]: I0129 11:24:29.389537 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27184255-bf1a-4cc5-b67d-e597a1ff246d-socket-dir\") pod \"csi-node-driver-4x5wc\" (UID: \"27184255-bf1a-4cc5-b67d-e597a1ff246d\") " pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:29.389653 kubelet[1783]: I0129 11:24:29.389638 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sj5z\" (UniqueName: \"kubernetes.io/projected/ad856552-61cc-43a5-86c0-ab923739b40b-kube-api-access-7sj5z\") pod \"kube-proxy-6g7vt\" (UID: \"ad856552-61cc-43a5-86c0-ab923739b40b\") " pod="kube-system/kube-proxy-6g7vt"
Jan 29 11:24:29.389741 kubelet[1783]: I0129 11:24:29.389728 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-lib-modules\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.389823 kubelet[1783]: I0129 11:24:29.389810 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-bin-dir\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.389921 kubelet[1783]: I0129 11:24:29.389904 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-log-dir\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.390031 kubelet[1783]: I0129 11:24:29.390015 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/27184255-bf1a-4cc5-b67d-e597a1ff246d-varrun\") pod \"csi-node-driver-4x5wc\" (UID: \"27184255-bf1a-4cc5-b67d-e597a1ff246d\") " pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:29.390295 kubelet[1783]: I0129 11:24:29.390113 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27184255-bf1a-4cc5-b67d-e597a1ff246d-kubelet-dir\") pod \"csi-node-driver-4x5wc\" (UID: \"27184255-bf1a-4cc5-b67d-e597a1ff246d\") " pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:29.390295 kubelet[1783]: I0129 11:24:29.390142 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad856552-61cc-43a5-86c0-ab923739b40b-lib-modules\") pod \"kube-proxy-6g7vt\" (UID: \"ad856552-61cc-43a5-86c0-ab923739b40b\") " pod="kube-system/kube-proxy-6g7vt"
Jan 29 11:24:29.390295 kubelet[1783]: I0129 11:24:29.390169 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d680fae3-bced-460c-a58b-8ebe48dfde4f-tigera-ca-bundle\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.390295 kubelet[1783]: I0129 11:24:29.390214 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-var-run-calico\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.390295 kubelet[1783]: I0129 11:24:29.390260 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27184255-bf1a-4cc5-b67d-e597a1ff246d-registration-dir\") pod \"csi-node-driver-4x5wc\" (UID: \"27184255-bf1a-4cc5-b67d-e597a1ff246d\") " pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:29.390549 kubelet[1783]: I0129 11:24:29.390328 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad856552-61cc-43a5-86c0-ab923739b40b-kube-proxy\") pod \"kube-proxy-6g7vt\" (UID: \"ad856552-61cc-43a5-86c0-ab923739b40b\") " pod="kube-system/kube-proxy-6g7vt"
Jan 29 11:24:29.390549 kubelet[1783]: I0129 11:24:29.390388 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad856552-61cc-43a5-86c0-ab923739b40b-xtables-lock\") pod \"kube-proxy-6g7vt\" (UID: \"ad856552-61cc-43a5-86c0-ab923739b40b\") " pod="kube-system/kube-proxy-6g7vt"
Jan 29 11:24:29.390549 kubelet[1783]: I0129 11:24:29.390423 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d680fae3-bced-460c-a58b-8ebe48dfde4f-node-certs\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.390549 kubelet[1783]: I0129 11:24:29.390452 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-net-dir\") pod \"calico-node-w74sz\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " pod="calico-system/calico-node-w74sz"
Jan 29 11:24:29.494557 kubelet[1783]: E0129 11:24:29.494314 1783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:24:29.494557 kubelet[1783]: W0129 11:24:29.494360 1783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:24:29.494557 kubelet[1783]: E0129 11:24:29.494406 1783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:24:29.495328 kubelet[1783]: E0129 11:24:29.495100 1783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:24:29.495328 kubelet[1783]: W0129 11:24:29.495141 1783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:24:29.495643 kubelet[1783]: E0129 11:24:29.495166 1783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:24:29.495936 kubelet[1783]: E0129 11:24:29.495914 1783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:24:29.496359 kubelet[1783]: W0129 11:24:29.496108 1783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:24:29.496359 kubelet[1783]: E0129 11:24:29.496143 1783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:24:29.496738 kubelet[1783]: E0129 11:24:29.496718 1783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:24:29.496855 kubelet[1783]: W0129 11:24:29.496835 1783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:24:29.496952 kubelet[1783]: E0129 11:24:29.496935 1783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:24:29.497556 kubelet[1783]: E0129 11:24:29.497529 1783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:24:29.497736 kubelet[1783]: W0129 11:24:29.497663 1783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:24:29.497736 kubelet[1783]: E0129 11:24:29.497692 1783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:24:29.508751 kubelet[1783]: E0129 11:24:29.508685 1783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:24:29.508959 kubelet[1783]: W0129 11:24:29.508848 1783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:24:29.509131 kubelet[1783]: E0129 11:24:29.508894 1783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:24:29.561497 kubelet[1783]: E0129 11:24:29.561347 1783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:24:29.561497 kubelet[1783]: W0129 11:24:29.561377 1783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:24:29.561497 kubelet[1783]: E0129 11:24:29.561406 1783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:24:29.565888 kubelet[1783]: E0129 11:24:29.565774 1783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:24:29.565888 kubelet[1783]: W0129 11:24:29.565806 1783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:24:29.565888 kubelet[1783]: E0129 11:24:29.565834 1783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:24:29.570373 kubelet[1783]: E0129 11:24:29.569653 1783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:24:29.570373 kubelet[1783]: W0129 11:24:29.570282 1783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:24:29.570373 kubelet[1783]: E0129 11:24:29.570323 1783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:24:29.684183 kubelet[1783]: E0129 11:24:29.684129 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:24:29.684905 kubelet[1783]: E0129 11:24:29.684685 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:24:29.685464 containerd[1470]: time="2025-01-29T11:24:29.685419577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w74sz,Uid:d680fae3-bced-460c-a58b-8ebe48dfde4f,Namespace:calico-system,Attempt:0,}"
Jan 29 11:24:29.685935 containerd[1470]: time="2025-01-29T11:24:29.685842529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6g7vt,Uid:ad856552-61cc-43a5-86c0-ab923739b40b,Namespace:kube-system,Attempt:0,}"
Jan 29 11:24:30.343601 kubelet[1783]: E0129 11:24:30.343434 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:30.437930 containerd[1470]: time="2025-01-29T11:24:30.437857476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:24:30.441273 containerd[1470]: time="2025-01-29T11:24:30.441149517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 29 11:24:30.442737 containerd[1470]: time="2025-01-29T11:24:30.442657967Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:24:30.445808 containerd[1470]: time="2025-01-29T11:24:30.445467383Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:24:30.447760 containerd[1470]: time="2025-01-29T11:24:30.447688289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:24:30.450211 containerd[1470]: time="2025-01-29T11:24:30.450130402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:24:30.451927 containerd[1470]: time="2025-01-29T11:24:30.451031281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 765.100172ms"
Jan 29 11:24:30.457951 containerd[1470]: time="2025-01-29T11:24:30.457792372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 772.222226ms"
Jan 29 11:24:30.504011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86412391.mount: Deactivated successfully.
Jan 29 11:24:30.681934 containerd[1470]: time="2025-01-29T11:24:30.681782101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:24:30.682691 containerd[1470]: time="2025-01-29T11:24:30.682427675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:24:30.682691 containerd[1470]: time="2025-01-29T11:24:30.682504835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:24:30.683030 containerd[1470]: time="2025-01-29T11:24:30.682668306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:24:30.685804 containerd[1470]: time="2025-01-29T11:24:30.685474836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:24:30.686696 containerd[1470]: time="2025-01-29T11:24:30.686443167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:24:30.686696 containerd[1470]: time="2025-01-29T11:24:30.686561276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:24:30.688213 containerd[1470]: time="2025-01-29T11:24:30.688104107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:24:30.819598 systemd[1]: Started cri-containerd-8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a.scope - libcontainer container 8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a.
Jan 29 11:24:30.821800 systemd[1]: Started cri-containerd-9308b83949826db1c1be7aa76b5ecc738ee19e27ce3e9ff18b51747aca8da184.scope - libcontainer container 9308b83949826db1c1be7aa76b5ecc738ee19e27ce3e9ff18b51747aca8da184.
Jan 29 11:24:30.870759 containerd[1470]: time="2025-01-29T11:24:30.870659918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w74sz,Uid:d680fae3-bced-460c-a58b-8ebe48dfde4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\""
Jan 29 11:24:30.873344 kubelet[1783]: E0129 11:24:30.873174 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:24:30.875643 containerd[1470]: time="2025-01-29T11:24:30.875529494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 11:24:30.879418 containerd[1470]: time="2025-01-29T11:24:30.879310746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6g7vt,Uid:ad856552-61cc-43a5-86c0-ab923739b40b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9308b83949826db1c1be7aa76b5ecc738ee19e27ce3e9ff18b51747aca8da184\""
Jan 29 11:24:30.880475 kubelet[1783]: E0129 11:24:30.880380 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:24:31.344628 kubelet[1783]: E0129 11:24:31.344565 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:31.470726 kubelet[1783]: E0129 11:24:31.470186 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d"
Jan 29 11:24:31.501916 systemd[1]: run-containerd-runc-k8s.io-9308b83949826db1c1be7aa76b5ecc738ee19e27ce3e9ff18b51747aca8da184-runc.vw51VL.mount: Deactivated successfully.
Jan 29 11:24:32.132291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754824532.mount: Deactivated successfully.
Jan 29 11:24:32.274815 containerd[1470]: time="2025-01-29T11:24:32.274718989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:24:32.276282 containerd[1470]: time="2025-01-29T11:24:32.276178662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 29 11:24:32.277345 containerd[1470]: time="2025-01-29T11:24:32.277282014Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:24:32.279673 containerd[1470]: time="2025-01-29T11:24:32.279600304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:24:32.280567 containerd[1470]: time="2025-01-29T11:24:32.280380590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.404793258s"
Jan 29 11:24:32.280567 containerd[1470]: time="2025-01-29T11:24:32.280419147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 29 11:24:32.282865 containerd[1470]: time="2025-01-29T11:24:32.282587051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 29 11:24:32.284524 containerd[1470]: time="2025-01-29T11:24:32.284310222Z" level=info msg="CreateContainer within sandbox \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 11:24:32.310499 containerd[1470]: time="2025-01-29T11:24:32.310258977Z" level=info msg="CreateContainer within sandbox \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\""
Jan 29 11:24:32.311813 containerd[1470]: time="2025-01-29T11:24:32.311754990Z" level=info msg="StartContainer for \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\""
Jan 29 11:24:32.345043 kubelet[1783]: E0129 11:24:32.344998 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:32.358481 systemd[1]: Started cri-containerd-fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93.scope - libcontainer container fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93.
Jan 29 11:24:32.409543 containerd[1470]: time="2025-01-29T11:24:32.409364239Z" level=info msg="StartContainer for \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\" returns successfully"
Jan 29 11:24:32.439275 systemd[1]: cri-containerd-fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93.scope: Deactivated successfully.
Jan 29 11:24:32.495704 kubelet[1783]: E0129 11:24:32.495651 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:24:32.509123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93-rootfs.mount: Deactivated successfully.
Jan 29 11:24:32.546935 containerd[1470]: time="2025-01-29T11:24:32.545980159Z" level=info msg="shim disconnected" id=fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93 namespace=k8s.io
Jan 29 11:24:32.546935 containerd[1470]: time="2025-01-29T11:24:32.546079178Z" level=warning msg="cleaning up after shim disconnected" id=fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93 namespace=k8s.io
Jan 29 11:24:32.546935 containerd[1470]: time="2025-01-29T11:24:32.546096007Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:24:33.346268 kubelet[1783]: E0129 11:24:33.346107 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:33.471727 kubelet[1783]: E0129 11:24:33.471148 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d"
Jan 29 11:24:33.499471 kubelet[1783]: E0129 11:24:33.499422 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:24:33.580794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367678438.mount: Deactivated successfully.
Jan 29 11:24:34.272314 containerd[1470]: time="2025-01-29T11:24:34.271985545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:24:34.273895 containerd[1470]: time="2025-01-29T11:24:34.273822287Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466"
Jan 29 11:24:34.275218 containerd[1470]: time="2025-01-29T11:24:34.274961715Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:24:34.277789 containerd[1470]: time="2025-01-29T11:24:34.277705644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:24:34.278821 containerd[1470]: time="2025-01-29T11:24:34.278583737Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.995949047s"
Jan 29 11:24:34.278821 containerd[1470]: time="2025-01-29T11:24:34.278627358Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\""
Jan 29 11:24:34.281739 containerd[1470]: time="2025-01-29T11:24:34.280645159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 11:24:34.282953 containerd[1470]: time="2025-01-29T11:24:34.282899097Z" level=info msg="CreateContainer within sandbox \"9308b83949826db1c1be7aa76b5ecc738ee19e27ce3e9ff18b51747aca8da184\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:24:34.312728 containerd[1470]: time="2025-01-29T11:24:34.312633577Z" level=info msg="CreateContainer within sandbox \"9308b83949826db1c1be7aa76b5ecc738ee19e27ce3e9ff18b51747aca8da184\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0399535bcd9cf2f19f325d59d461e65ad639498b7d1b770dc2914d62d246e9b\""
Jan 29 11:24:34.313637 containerd[1470]: time="2025-01-29T11:24:34.313601991Z" level=info msg="StartContainer for \"a0399535bcd9cf2f19f325d59d461e65ad639498b7d1b770dc2914d62d246e9b\""
Jan 29 11:24:34.347303 kubelet[1783]: E0129 11:24:34.347216 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:34.370726 systemd[1]: Started cri-containerd-a0399535bcd9cf2f19f325d59d461e65ad639498b7d1b770dc2914d62d246e9b.scope - libcontainer container a0399535bcd9cf2f19f325d59d461e65ad639498b7d1b770dc2914d62d246e9b.
Jan 29 11:24:34.429080 containerd[1470]: time="2025-01-29T11:24:34.428877952Z" level=info msg="StartContainer for \"a0399535bcd9cf2f19f325d59d461e65ad639498b7d1b770dc2914d62d246e9b\" returns successfully" Jan 29 11:24:34.510269 kubelet[1783]: E0129 11:24:34.510075 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:34.538467 kubelet[1783]: I0129 11:24:34.538132 1783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6g7vt" podStartSLOduration=4.138822866 podStartE2EDuration="7.53809842s" podCreationTimestamp="2025-01-29 11:24:27 +0000 UTC" firstStartedPulling="2025-01-29 11:24:30.881142846 +0000 UTC m=+4.280461262" lastFinishedPulling="2025-01-29 11:24:34.280418403 +0000 UTC m=+7.679736816" observedRunningTime="2025-01-29 11:24:34.537792942 +0000 UTC m=+7.937111366" watchObservedRunningTime="2025-01-29 11:24:34.53809842 +0000 UTC m=+7.937416860" Jan 29 11:24:35.348968 kubelet[1783]: E0129 11:24:35.348867 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:35.470008 kubelet[1783]: E0129 11:24:35.469462 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d" Jan 29 11:24:35.515286 kubelet[1783]: E0129 11:24:35.514595 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:36.350282 kubelet[1783]: E0129 11:24:36.349858 1783 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:37.350818 kubelet[1783]: E0129 11:24:37.350769 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:37.475353 kubelet[1783]: E0129 11:24:37.474724 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d" Jan 29 11:24:38.226750 containerd[1470]: time="2025-01-29T11:24:38.225698745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:38.227762 containerd[1470]: time="2025-01-29T11:24:38.227698925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:24:38.229645 containerd[1470]: time="2025-01-29T11:24:38.229567861Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:38.234274 containerd[1470]: time="2025-01-29T11:24:38.233862558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:38.235541 containerd[1470]: time="2025-01-29T11:24:38.235488145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.954790674s" Jan 29 
11:24:38.235742 containerd[1470]: time="2025-01-29T11:24:38.235718106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:24:38.239778 containerd[1470]: time="2025-01-29T11:24:38.239610457Z" level=info msg="CreateContainer within sandbox \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:24:38.272006 containerd[1470]: time="2025-01-29T11:24:38.271813153Z" level=info msg="CreateContainer within sandbox \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\"" Jan 29 11:24:38.273212 containerd[1470]: time="2025-01-29T11:24:38.273156058Z" level=info msg="StartContainer for \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\"" Jan 29 11:24:38.327628 systemd[1]: Started cri-containerd-18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2.scope - libcontainer container 18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2. 
Jan 29 11:24:38.351528 kubelet[1783]: E0129 11:24:38.351461 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:38.381358 containerd[1470]: time="2025-01-29T11:24:38.380969550Z" level=info msg="StartContainer for \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\" returns successfully" Jan 29 11:24:38.526782 kubelet[1783]: E0129 11:24:38.526672 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:39.189198 containerd[1470]: time="2025-01-29T11:24:39.189121422Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:24:39.192801 systemd[1]: cri-containerd-18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2.scope: Deactivated successfully. Jan 29 11:24:39.220087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2-rootfs.mount: Deactivated successfully. 
Jan 29 11:24:39.259642 kubelet[1783]: I0129 11:24:39.259603 1783 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 11:24:39.297209 containerd[1470]: time="2025-01-29T11:24:39.296948825Z" level=info msg="shim disconnected" id=18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2 namespace=k8s.io Jan 29 11:24:39.297209 containerd[1470]: time="2025-01-29T11:24:39.297023417Z" level=warning msg="cleaning up after shim disconnected" id=18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2 namespace=k8s.io Jan 29 11:24:39.297209 containerd[1470]: time="2025-01-29T11:24:39.297035963Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:39.352372 kubelet[1783]: E0129 11:24:39.352292 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:39.486731 systemd[1]: Created slice kubepods-besteffort-pod27184255_bf1a_4cc5_b67d_e597a1ff246d.slice - libcontainer container kubepods-besteffort-pod27184255_bf1a_4cc5_b67d_e597a1ff246d.slice. Jan 29 11:24:39.490564 containerd[1470]: time="2025-01-29T11:24:39.490513141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:0,}" Jan 29 11:24:39.533706 kubelet[1783]: E0129 11:24:39.532997 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:39.534791 containerd[1470]: time="2025-01-29T11:24:39.534743890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:24:39.540160 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 29 11:24:39.589804 containerd[1470]: time="2025-01-29T11:24:39.589737260Z" level=error msg="Failed to destroy network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:39.590624 containerd[1470]: time="2025-01-29T11:24:39.590188624Z" level=error msg="encountered an error cleaning up failed sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:39.590624 containerd[1470]: time="2025-01-29T11:24:39.590318367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:39.590728 kubelet[1783]: E0129 11:24:39.590564 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:39.590728 kubelet[1783]: E0129 11:24:39.590645 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc" Jan 29 11:24:39.590728 kubelet[1783]: E0129 11:24:39.590671 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc" Jan 29 11:24:39.591067 kubelet[1783]: E0129 11:24:39.590725 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d" Jan 29 11:24:39.592565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e-shm.mount: Deactivated successfully. 
Jan 29 11:24:40.353271 kubelet[1783]: E0129 11:24:40.353170 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:40.536256 kubelet[1783]: I0129 11:24:40.535917 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e" Jan 29 11:24:40.539176 containerd[1470]: time="2025-01-29T11:24:40.536725877Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\"" Jan 29 11:24:40.539176 containerd[1470]: time="2025-01-29T11:24:40.536988778Z" level=info msg="Ensure that sandbox e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e in task-service has been cleanup successfully" Jan 29 11:24:40.539074 systemd[1]: run-netns-cni\x2d7c9ea52d\x2dbbcc\x2d1cad\x2d8104\x2d742b778322c3.mount: Deactivated successfully. Jan 29 11:24:40.540551 containerd[1470]: time="2025-01-29T11:24:40.539938638Z" level=info msg="TearDown network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" successfully" Jan 29 11:24:40.540551 containerd[1470]: time="2025-01-29T11:24:40.539971806Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" returns successfully" Jan 29 11:24:40.541062 containerd[1470]: time="2025-01-29T11:24:40.541037270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:1,}" Jan 29 11:24:40.623122 containerd[1470]: time="2025-01-29T11:24:40.622503668Z" level=error msg="Failed to destroy network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 
11:24:40.624617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c-shm.mount: Deactivated successfully. Jan 29 11:24:40.625889 containerd[1470]: time="2025-01-29T11:24:40.625576310Z" level=error msg="encountered an error cleaning up failed sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:40.626583 containerd[1470]: time="2025-01-29T11:24:40.626540324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:40.628073 kubelet[1783]: E0129 11:24:40.627706 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:40.628073 kubelet[1783]: E0129 11:24:40.627774 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc" Jan 29 11:24:40.628073 kubelet[1783]: E0129 11:24:40.627797 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc" Jan 29 11:24:40.628254 kubelet[1783]: E0129 11:24:40.627843 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d" Jan 29 11:24:40.930883 systemd[1]: Created slice kubepods-besteffort-pod48d527ce_aca9_411e_89b8_198e2a4f2d33.slice - libcontainer container kubepods-besteffort-pod48d527ce_aca9_411e_89b8_198e2a4f2d33.slice. 
Jan 29 11:24:40.978272 kubelet[1783]: I0129 11:24:40.977701 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8cf\" (UniqueName: \"kubernetes.io/projected/48d527ce-aca9-411e-89b8-198e2a4f2d33-kube-api-access-dp8cf\") pod \"nginx-deployment-7fcdb87857-x27bp\" (UID: \"48d527ce-aca9-411e-89b8-198e2a4f2d33\") " pod="default/nginx-deployment-7fcdb87857-x27bp" Jan 29 11:24:41.236678 containerd[1470]: time="2025-01-29T11:24:41.235956129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:0,}" Jan 29 11:24:41.335743 containerd[1470]: time="2025-01-29T11:24:41.335678431Z" level=error msg="Failed to destroy network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:41.336160 containerd[1470]: time="2025-01-29T11:24:41.336082878Z" level=error msg="encountered an error cleaning up failed sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:41.336206 containerd[1470]: time="2025-01-29T11:24:41.336182486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 11:24:41.336522 kubelet[1783]: E0129 11:24:41.336473 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:41.337305 kubelet[1783]: E0129 11:24:41.337082 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp" Jan 29 11:24:41.337305 kubelet[1783]: E0129 11:24:41.337138 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp" Jan 29 11:24:41.337305 kubelet[1783]: E0129 11:24:41.337211 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-x27bp" podUID="48d527ce-aca9-411e-89b8-198e2a4f2d33" Jan 29 11:24:41.353903 kubelet[1783]: E0129 11:24:41.353842 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:41.555625 kubelet[1783]: I0129 11:24:41.555467 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c" Jan 29 11:24:41.557312 containerd[1470]: time="2025-01-29T11:24:41.556799474Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\"" Jan 29 11:24:41.557312 containerd[1470]: time="2025-01-29T11:24:41.557122825Z" level=info msg="Ensure that sandbox eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c in task-service has been cleanup successfully" Jan 29 11:24:41.559615 containerd[1470]: time="2025-01-29T11:24:41.559264260Z" level=info msg="TearDown network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" successfully" Jan 29 11:24:41.559615 containerd[1470]: time="2025-01-29T11:24:41.559330308Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" returns successfully" Jan 29 11:24:41.560450 containerd[1470]: time="2025-01-29T11:24:41.559908360Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\"" Jan 29 11:24:41.560450 containerd[1470]: time="2025-01-29T11:24:41.560011098Z" level=info msg="TearDown network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" successfully" Jan 29 11:24:41.560450 
containerd[1470]: time="2025-01-29T11:24:41.560027599Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" returns successfully" Jan 29 11:24:41.560564 systemd[1]: run-netns-cni\x2dbbfe6a58\x2dd867\x2d1267\x2d4ee4\x2d3238d21645c2.mount: Deactivated successfully. Jan 29 11:24:41.566776 containerd[1470]: time="2025-01-29T11:24:41.566354581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:2,}" Jan 29 11:24:41.570114 kubelet[1783]: I0129 11:24:41.566477 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241" Jan 29 11:24:41.569303 systemd[1]: run-netns-cni\x2d520b8538\x2de5fe\x2d989e\x2d8205\x2d683a01db74c7.mount: Deactivated successfully. Jan 29 11:24:41.570297 containerd[1470]: time="2025-01-29T11:24:41.567335662Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\"" Jan 29 11:24:41.570297 containerd[1470]: time="2025-01-29T11:24:41.567536310Z" level=info msg="Ensure that sandbox 962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241 in task-service has been cleanup successfully" Jan 29 11:24:41.571327 containerd[1470]: time="2025-01-29T11:24:41.571192104Z" level=info msg="TearDown network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" successfully" Jan 29 11:24:41.571327 containerd[1470]: time="2025-01-29T11:24:41.571219315Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" returns successfully" Jan 29 11:24:41.572481 containerd[1470]: time="2025-01-29T11:24:41.572454490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:1,}" Jan 29 11:24:41.756158 
containerd[1470]: time="2025-01-29T11:24:41.754668933Z" level=error msg="Failed to destroy network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:41.756158 containerd[1470]: time="2025-01-29T11:24:41.755070495Z" level=error msg="encountered an error cleaning up failed sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:41.756158 containerd[1470]: time="2025-01-29T11:24:41.755145340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:41.756970 kubelet[1783]: E0129 11:24:41.756574 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:41.756970 kubelet[1783]: E0129 11:24:41.756643 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:41.756970 kubelet[1783]: E0129 11:24:41.756672 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:41.757123 kubelet[1783]: E0129 11:24:41.756716 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d"
Jan 29 11:24:41.767258 containerd[1470]: time="2025-01-29T11:24:41.767088383Z" level=error msg="Failed to destroy network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:41.768180 containerd[1470]: time="2025-01-29T11:24:41.768139864Z" level=error msg="encountered an error cleaning up failed sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:41.768284 containerd[1470]: time="2025-01-29T11:24:41.768245747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:41.768561 kubelet[1783]: E0129 11:24:41.768526 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:41.768677 kubelet[1783]: E0129 11:24:41.768662 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp"
Jan 29 11:24:41.768792 kubelet[1783]: E0129 11:24:41.768771 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp"
Jan 29 11:24:41.768917 kubelet[1783]: E0129 11:24:41.768889 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-x27bp" podUID="48d527ce-aca9-411e-89b8-198e2a4f2d33"
Jan 29 11:24:42.355071 kubelet[1783]: E0129 11:24:42.354995 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:42.540500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc-shm.mount: Deactivated successfully.
Jan 29 11:24:42.571825 kubelet[1783]: I0129 11:24:42.571780 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead"
Jan 29 11:24:42.572960 containerd[1470]: time="2025-01-29T11:24:42.572695070Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\""
Jan 29 11:24:42.574214 containerd[1470]: time="2025-01-29T11:24:42.573999709Z" level=info msg="Ensure that sandbox eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead in task-service has been cleanup successfully"
Jan 29 11:24:42.574323 kubelet[1783]: I0129 11:24:42.574251 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc"
Jan 29 11:24:42.575154 containerd[1470]: time="2025-01-29T11:24:42.574809122Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\""
Jan 29 11:24:42.576313 containerd[1470]: time="2025-01-29T11:24:42.574900978Z" level=info msg="TearDown network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" successfully"
Jan 29 11:24:42.576535 containerd[1470]: time="2025-01-29T11:24:42.576393256Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" returns successfully"
Jan 29 11:24:42.576535 containerd[1470]: time="2025-01-29T11:24:42.575044880Z" level=info msg="Ensure that sandbox 7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc in task-service has been cleanup successfully"
Jan 29 11:24:42.577481 containerd[1470]: time="2025-01-29T11:24:42.577364843Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\""
Jan 29 11:24:42.577973 containerd[1470]: time="2025-01-29T11:24:42.577632197Z" level=info msg="TearDown network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" successfully"
Jan 29 11:24:42.577973 containerd[1470]: time="2025-01-29T11:24:42.577733178Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" returns successfully"
Jan 29 11:24:42.578194 containerd[1470]: time="2025-01-29T11:24:42.578177812Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\""
Jan 29 11:24:42.578777 containerd[1470]: time="2025-01-29T11:24:42.578505220Z" level=info msg="TearDown network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" successfully"
Jan 29 11:24:42.578976 containerd[1470]: time="2025-01-29T11:24:42.578957918Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" returns successfully"
Jan 29 11:24:42.579192 containerd[1470]: time="2025-01-29T11:24:42.578602097Z" level=info msg="TearDown network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" successfully"
Jan 29 11:24:42.580119 containerd[1470]: time="2025-01-29T11:24:42.579261045Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" returns successfully"
Jan 29 11:24:42.579372 systemd[1]: run-netns-cni\x2d689de522\x2d1f54\x2d5a7e\x2d2629\x2ddd459c62abb9.mount: Deactivated successfully.
Jan 29 11:24:42.581864 containerd[1470]: time="2025-01-29T11:24:42.581427276Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\""
Jan 29 11:24:42.581864 containerd[1470]: time="2025-01-29T11:24:42.581450874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:3,}"
Jan 29 11:24:42.581864 containerd[1470]: time="2025-01-29T11:24:42.581830940Z" level=info msg="TearDown network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" successfully"
Jan 29 11:24:42.582133 containerd[1470]: time="2025-01-29T11:24:42.581847512Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" returns successfully"
Jan 29 11:24:42.582785 containerd[1470]: time="2025-01-29T11:24:42.582540565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:2,}"
Jan 29 11:24:42.586410 systemd[1]: run-netns-cni\x2df0bfbbc5\x2ddb69\x2dfd65\x2ddceb\x2d7cc50e9bca41.mount: Deactivated successfully.
Jan 29 11:24:42.624491 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Jan 29 11:24:42.755520 containerd[1470]: time="2025-01-29T11:24:42.755449562Z" level=error msg="Failed to destroy network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:42.756115 containerd[1470]: time="2025-01-29T11:24:42.756069342Z" level=error msg="encountered an error cleaning up failed sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:42.756376 containerd[1470]: time="2025-01-29T11:24:42.756337091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:42.757322 kubelet[1783]: E0129 11:24:42.757274 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:42.757748 kubelet[1783]: E0129 11:24:42.757595 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:42.757748 kubelet[1783]: E0129 11:24:42.757642 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:42.758700 kubelet[1783]: E0129 11:24:42.758560 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d"
Jan 29 11:24:42.781336 containerd[1470]: time="2025-01-29T11:24:42.781181795Z" level=error msg="Failed to destroy network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:42.781665 containerd[1470]: time="2025-01-29T11:24:42.781555911Z" level=error msg="encountered an error cleaning up failed sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:42.781665 containerd[1470]: time="2025-01-29T11:24:42.781631348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:42.782386 kubelet[1783]: E0129 11:24:42.781939 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:42.782386 kubelet[1783]: E0129 11:24:42.781997 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp"
Jan 29 11:24:42.782386 kubelet[1783]: E0129 11:24:42.782019 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp"
Jan 29 11:24:42.782542 kubelet[1783]: E0129 11:24:42.782068 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-x27bp" podUID="48d527ce-aca9-411e-89b8-198e2a4f2d33"
Jan 29 11:24:43.355887 kubelet[1783]: E0129 11:24:43.355805 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:43.541988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f-shm.mount: Deactivated successfully.
Jan 29 11:24:43.581286 kubelet[1783]: I0129 11:24:43.580679 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33"
Jan 29 11:24:43.582166 containerd[1470]: time="2025-01-29T11:24:43.581591532Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\""
Jan 29 11:24:43.582166 containerd[1470]: time="2025-01-29T11:24:43.581890023Z" level=info msg="Ensure that sandbox 4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33 in task-service has been cleanup successfully"
Jan 29 11:24:43.583345 containerd[1470]: time="2025-01-29T11:24:43.582563118Z" level=info msg="TearDown network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" successfully"
Jan 29 11:24:43.583345 containerd[1470]: time="2025-01-29T11:24:43.582584557Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" returns successfully"
Jan 29 11:24:43.585262 containerd[1470]: time="2025-01-29T11:24:43.584704754Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\""
Jan 29 11:24:43.586314 containerd[1470]: time="2025-01-29T11:24:43.585546901Z" level=info msg="TearDown network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" successfully"
Jan 29 11:24:43.586080 systemd[1]: run-netns-cni\x2dbf1ca63c\x2dca84\x2d7f55\x2dab3d\x2d287623c8fb35.mount: Deactivated successfully.
Jan 29 11:24:43.587794 containerd[1470]: time="2025-01-29T11:24:43.587456292Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" returns successfully"
Jan 29 11:24:43.588838 containerd[1470]: time="2025-01-29T11:24:43.588814053Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\""
Jan 29 11:24:43.589009 containerd[1470]: time="2025-01-29T11:24:43.588995286Z" level=info msg="TearDown network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" successfully"
Jan 29 11:24:43.589060 containerd[1470]: time="2025-01-29T11:24:43.589049161Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" returns successfully"
Jan 29 11:24:43.591780 kubelet[1783]: I0129 11:24:43.591396 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f"
Jan 29 11:24:43.591938 containerd[1470]: time="2025-01-29T11:24:43.591469321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:3,}"
Jan 29 11:24:43.595357 containerd[1470]: time="2025-01-29T11:24:43.595321990Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\""
Jan 29 11:24:43.595694 containerd[1470]: time="2025-01-29T11:24:43.595673001Z" level=info msg="Ensure that sandbox bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f in task-service has been cleanup successfully"
Jan 29 11:24:43.595914 containerd[1470]: time="2025-01-29T11:24:43.595895185Z" level=info msg="TearDown network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" successfully"
Jan 29 11:24:43.595975 containerd[1470]: time="2025-01-29T11:24:43.595965128Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" returns successfully"
Jan 29 11:24:43.597619 containerd[1470]: time="2025-01-29T11:24:43.597569650Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\""
Jan 29 11:24:43.599043 systemd[1]: run-netns-cni\x2d645995d9\x2d22a5\x2dddc2\x2db7fa\x2d78aceff533e2.mount: Deactivated successfully.
Jan 29 11:24:43.599363 containerd[1470]: time="2025-01-29T11:24:43.599331192Z" level=info msg="TearDown network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" successfully"
Jan 29 11:24:43.601205 containerd[1470]: time="2025-01-29T11:24:43.599431455Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" returns successfully"
Jan 29 11:24:43.602933 containerd[1470]: time="2025-01-29T11:24:43.602902153Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\""
Jan 29 11:24:43.603176 containerd[1470]: time="2025-01-29T11:24:43.603157785Z" level=info msg="TearDown network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" successfully"
Jan 29 11:24:43.603262 containerd[1470]: time="2025-01-29T11:24:43.603240221Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" returns successfully"
Jan 29 11:24:43.603688 containerd[1470]: time="2025-01-29T11:24:43.603669153Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\""
Jan 29 11:24:43.603868 containerd[1470]: time="2025-01-29T11:24:43.603848114Z" level=info msg="TearDown network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" successfully"
Jan 29 11:24:43.603952 containerd[1470]: time="2025-01-29T11:24:43.603938520Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" returns successfully"
Jan 29 11:24:43.605646 containerd[1470]: time="2025-01-29T11:24:43.605621900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:4,}"
Jan 29 11:24:43.775994 containerd[1470]: time="2025-01-29T11:24:43.775919588Z" level=error msg="Failed to destroy network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:43.776559 containerd[1470]: time="2025-01-29T11:24:43.776418077Z" level=error msg="encountered an error cleaning up failed sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:43.776559 containerd[1470]: time="2025-01-29T11:24:43.776513992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:43.777000 kubelet[1783]: E0129 11:24:43.776846 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:43.777000 kubelet[1783]: E0129 11:24:43.776925 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp"
Jan 29 11:24:43.777000 kubelet[1783]: E0129 11:24:43.776960 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp"
Jan 29 11:24:43.777137 kubelet[1783]: E0129 11:24:43.777026 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-x27bp" podUID="48d527ce-aca9-411e-89b8-198e2a4f2d33"
Jan 29 11:24:43.799060 containerd[1470]: time="2025-01-29T11:24:43.798989513Z" level=error msg="Failed to destroy network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:43.800108 containerd[1470]: time="2025-01-29T11:24:43.800051165Z" level=error msg="encountered an error cleaning up failed sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:43.800209 containerd[1470]: time="2025-01-29T11:24:43.800159347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:43.800928 kubelet[1783]: E0129 11:24:43.800480 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:43.800928 kubelet[1783]: E0129 11:24:43.800574 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:43.800928 kubelet[1783]: E0129 11:24:43.800611 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc"
Jan 29 11:24:43.801104 kubelet[1783]: E0129 11:24:43.800676 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d"
Jan 29 11:24:44.356526 kubelet[1783]: E0129 11:24:44.356453 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:44.541685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534-shm.mount: Deactivated successfully.
Jan 29 11:24:44.596505 kubelet[1783]: I0129 11:24:44.595443 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f"
Jan 29 11:24:44.596678 containerd[1470]: time="2025-01-29T11:24:44.596323844Z" level=info msg="StopPodSandbox for \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\""
Jan 29 11:24:44.599358 kubelet[1783]: I0129 11:24:44.598657 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534"
Jan 29 11:24:44.599510 containerd[1470]: time="2025-01-29T11:24:44.599197199Z" level=info msg="StopPodSandbox for \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\""
Jan 29 11:24:44.621276 containerd[1470]: time="2025-01-29T11:24:44.621177863Z" level=info msg="Ensure that sandbox 409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f in task-service has been cleanup successfully"
Jan 29 11:24:44.625164 containerd[1470]: time="2025-01-29T11:24:44.622330769Z" level=info msg="TearDown network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" successfully"
Jan 29 11:24:44.625164 containerd[1470]: time="2025-01-29T11:24:44.622373324Z" level=info msg="StopPodSandbox for \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" returns successfully"
Jan 29 11:24:44.625164 containerd[1470]: time="2025-01-29T11:24:44.622560195Z" level=info msg="Ensure that sandbox 44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534 in task-service has been cleanup successfully"
Jan 29 11:24:44.624833 systemd[1]: run-netns-cni\x2dfff451bb\x2dfd6d\x2dc5ac\x2dd7af\x2d9d6289911ac4.mount: Deactivated successfully.
Jan 29 11:24:44.624967 systemd[1]: run-netns-cni\x2d95d2024d\x2da781\x2dbfb2\x2d3b54\x2d05c9c29ef004.mount: Deactivated successfully.
Jan 29 11:24:44.628446 containerd[1470]: time="2025-01-29T11:24:44.626133346Z" level=info msg="TearDown network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" successfully"
Jan 29 11:24:44.628446 containerd[1470]: time="2025-01-29T11:24:44.626169117Z" level=info msg="StopPodSandbox for \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" returns successfully"
Jan 29 11:24:44.628446 containerd[1470]: time="2025-01-29T11:24:44.626487935Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\""
Jan 29 11:24:44.628446 containerd[1470]: time="2025-01-29T11:24:44.626630397Z" level=info msg="TearDown network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" successfully"
Jan 29 11:24:44.628446 containerd[1470]: time="2025-01-29T11:24:44.626648946Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" returns successfully"
Jan 29 11:24:44.630326 containerd[1470]: time="2025-01-29T11:24:44.629699044Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\""
Jan 29 11:24:44.630326 containerd[1470]: time="2025-01-29T11:24:44.629862972Z" level=info msg="TearDown network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" successfully"
Jan 29 11:24:44.630326 containerd[1470]: time="2025-01-29T11:24:44.629880617Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" returns successfully"
Jan 29 11:24:44.631896 containerd[1470]: time="2025-01-29T11:24:44.631838643Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\""
Jan 29 11:24:44.632021 containerd[1470]: time="2025-01-29T11:24:44.631989838Z" level=info msg="TearDown network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" successfully"
Jan 29 11:24:44.632021 containerd[1470]: time="2025-01-29T11:24:44.632006388Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" returns successfully"
Jan 29 11:24:44.633104 containerd[1470]: time="2025-01-29T11:24:44.633080244Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\""
Jan 29 11:24:44.633323 containerd[1470]: time="2025-01-29T11:24:44.633298910Z" level=info msg="TearDown network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" successfully"
Jan 29 11:24:44.633416 containerd[1470]: time="2025-01-29T11:24:44.633399781Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" returns successfully"
Jan 29 11:24:44.633600 containerd[1470]: time="2025-01-29T11:24:44.633573897Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\""
Jan 29 11:24:44.633736 containerd[1470]: time="2025-01-29T11:24:44.633722090Z" level=info msg="TearDown network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" successfully"
Jan 29 11:24:44.633788 containerd[1470]: time="2025-01-29T11:24:44.633778289Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" returns successfully"
Jan 29 11:24:44.634273 containerd[1470]: time="2025-01-29T11:24:44.634252366Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\""
Jan 29 11:24:44.634452 containerd[1470]: time="2025-01-29T11:24:44.634437023Z" level=info msg="TearDown network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" successfully"
Jan 29 11:24:44.634507 containerd[1470]: time="2025-01-29T11:24:44.634497933Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" returns successfully"
Jan 29 11:24:44.634796 containerd[1470]: time="2025-01-29T11:24:44.634736368Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\""
Jan 29 11:24:44.635089 containerd[1470]: time="2025-01-29T11:24:44.634936786Z" level=info msg="TearDown network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" successfully"
Jan 29 11:24:44.635089 containerd[1470]: time="2025-01-29T11:24:44.634951876Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" returns successfully"
Jan 29 11:24:44.635974 containerd[1470]: time="2025-01-29T11:24:44.635651574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:4,}"
Jan 29 11:24:44.636412 containerd[1470]: time="2025-01-29T11:24:44.636314447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:5,}"
Jan 29 11:24:44.960471 containerd[1470]: time="2025-01-29T11:24:44.959622512Z" level=error msg="Failed to destroy network for sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:44.961325 containerd[1470]: time="2025-01-29T11:24:44.961250794Z" level=error msg="encountered an error cleaning up failed sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:24:44.962026 containerd[1470]: time="2025-01-29T11:24:44.961352949Z" level=error
msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:44.964154 kubelet[1783]: E0129 11:24:44.963619 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:44.964154 kubelet[1783]: E0129 11:24:44.963714 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp" Jan 29 11:24:44.964154 kubelet[1783]: E0129 11:24:44.963745 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp" Jan 29 11:24:44.964449 kubelet[1783]: E0129 11:24:44.963810 1783 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-x27bp" podUID="48d527ce-aca9-411e-89b8-198e2a4f2d33" Jan 29 11:24:44.980757 containerd[1470]: time="2025-01-29T11:24:44.980493614Z" level=error msg="Failed to destroy network for sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:44.981409 containerd[1470]: time="2025-01-29T11:24:44.981213184Z" level=error msg="encountered an error cleaning up failed sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:44.981409 containerd[1470]: time="2025-01-29T11:24:44.981303786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:44.981691 kubelet[1783]: E0129 11:24:44.981587 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:44.981738 kubelet[1783]: E0129 11:24:44.981679 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc" Jan 29 11:24:44.981738 kubelet[1783]: E0129 11:24:44.981722 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc" Jan 29 11:24:44.981961 kubelet[1783]: E0129 11:24:44.981781 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d" Jan 29 11:24:45.357277 kubelet[1783]: E0129 11:24:45.357048 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:45.540901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518-shm.mount: Deactivated successfully. Jan 29 11:24:45.541484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1-shm.mount: Deactivated successfully. Jan 29 11:24:45.607839 kubelet[1783]: I0129 11:24:45.606212 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1" Jan 29 11:24:45.611400 containerd[1470]: time="2025-01-29T11:24:45.611125388Z" level=info msg="StopPodSandbox for \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\"" Jan 29 11:24:45.612838 containerd[1470]: time="2025-01-29T11:24:45.612797903Z" level=info msg="Ensure that sandbox 446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1 in task-service has been cleanup successfully" Jan 29 11:24:45.615385 containerd[1470]: time="2025-01-29T11:24:45.615345005Z" level=info msg="TearDown network for sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\" successfully" Jan 29 11:24:45.616256 containerd[1470]: time="2025-01-29T11:24:45.615518009Z" level=info msg="StopPodSandbox for \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\" returns successfully" Jan 29 11:24:45.617737 systemd[1]: 
run-netns-cni\x2daa12755d\x2d2acb\x2d506c\x2d66d8\x2d674dd29e8558.mount: Deactivated successfully. Jan 29 11:24:45.621592 containerd[1470]: time="2025-01-29T11:24:45.621543203Z" level=info msg="StopPodSandbox for \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\"" Jan 29 11:24:45.621709 containerd[1470]: time="2025-01-29T11:24:45.621660148Z" level=info msg="TearDown network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" successfully" Jan 29 11:24:45.621709 containerd[1470]: time="2025-01-29T11:24:45.621672343Z" level=info msg="StopPodSandbox for \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" returns successfully" Jan 29 11:24:45.626270 containerd[1470]: time="2025-01-29T11:24:45.624928414Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\"" Jan 29 11:24:45.626270 containerd[1470]: time="2025-01-29T11:24:45.625022927Z" level=info msg="TearDown network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" successfully" Jan 29 11:24:45.626270 containerd[1470]: time="2025-01-29T11:24:45.625068352Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" returns successfully" Jan 29 11:24:45.627712 containerd[1470]: time="2025-01-29T11:24:45.627209513Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\"" Jan 29 11:24:45.628024 containerd[1470]: time="2025-01-29T11:24:45.627948914Z" level=info msg="TearDown network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" successfully" Jan 29 11:24:45.628157 containerd[1470]: time="2025-01-29T11:24:45.628134853Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" returns successfully" Jan 29 11:24:45.628902 containerd[1470]: time="2025-01-29T11:24:45.628874600Z" level=info msg="StopPodSandbox 
for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\"" Jan 29 11:24:45.629165 containerd[1470]: time="2025-01-29T11:24:45.629140848Z" level=info msg="TearDown network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" successfully" Jan 29 11:24:45.629291 containerd[1470]: time="2025-01-29T11:24:45.629272460Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" returns successfully" Jan 29 11:24:45.630329 containerd[1470]: time="2025-01-29T11:24:45.630298996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:5,}" Jan 29 11:24:45.643907 kubelet[1783]: I0129 11:24:45.643017 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518" Jan 29 11:24:45.644084 containerd[1470]: time="2025-01-29T11:24:45.643974519Z" level=info msg="StopPodSandbox for \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\"" Jan 29 11:24:45.644328 containerd[1470]: time="2025-01-29T11:24:45.644300973Z" level=info msg="Ensure that sandbox c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518 in task-service has been cleanup successfully" Jan 29 11:24:45.648614 containerd[1470]: time="2025-01-29T11:24:45.646474474Z" level=info msg="TearDown network for sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\" successfully" Jan 29 11:24:45.648614 containerd[1470]: time="2025-01-29T11:24:45.646501687Z" level=info msg="StopPodSandbox for \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\" returns successfully" Jan 29 11:24:45.648171 systemd[1]: run-netns-cni\x2d07016e5b\x2d1732\x2d4225\x2deabb\x2d40ea7050a1a6.mount: Deactivated successfully. 
Jan 29 11:24:45.650099 containerd[1470]: time="2025-01-29T11:24:45.649670082Z" level=info msg="StopPodSandbox for \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\"" Jan 29 11:24:45.650099 containerd[1470]: time="2025-01-29T11:24:45.649816886Z" level=info msg="TearDown network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" successfully" Jan 29 11:24:45.650099 containerd[1470]: time="2025-01-29T11:24:45.649835192Z" level=info msg="StopPodSandbox for \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" returns successfully" Jan 29 11:24:45.650579 containerd[1470]: time="2025-01-29T11:24:45.650554385Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\"" Jan 29 11:24:45.651048 containerd[1470]: time="2025-01-29T11:24:45.650967609Z" level=info msg="TearDown network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" successfully" Jan 29 11:24:45.651048 containerd[1470]: time="2025-01-29T11:24:45.651008976Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" returns successfully" Jan 29 11:24:45.652092 containerd[1470]: time="2025-01-29T11:24:45.651850804Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\"" Jan 29 11:24:45.652092 containerd[1470]: time="2025-01-29T11:24:45.651968478Z" level=info msg="TearDown network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" successfully" Jan 29 11:24:45.652092 containerd[1470]: time="2025-01-29T11:24:45.651985793Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" returns successfully" Jan 29 11:24:45.653141 containerd[1470]: time="2025-01-29T11:24:45.652777279Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\"" Jan 29 11:24:45.653141 
containerd[1470]: time="2025-01-29T11:24:45.652886322Z" level=info msg="TearDown network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" successfully" Jan 29 11:24:45.653141 containerd[1470]: time="2025-01-29T11:24:45.652902849Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" returns successfully" Jan 29 11:24:45.654080 containerd[1470]: time="2025-01-29T11:24:45.653391088Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\"" Jan 29 11:24:45.654080 containerd[1470]: time="2025-01-29T11:24:45.653571473Z" level=info msg="TearDown network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" successfully" Jan 29 11:24:45.654080 containerd[1470]: time="2025-01-29T11:24:45.653589930Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" returns successfully" Jan 29 11:24:45.655100 containerd[1470]: time="2025-01-29T11:24:45.654700411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:6,}" Jan 29 11:24:45.797135 containerd[1470]: time="2025-01-29T11:24:45.797057380Z" level=error msg="Failed to destroy network for sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:45.797567 containerd[1470]: time="2025-01-29T11:24:45.797537604Z" level=error msg="encountered an error cleaning up failed sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:45.797650 containerd[1470]: time="2025-01-29T11:24:45.797623121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:45.799508 kubelet[1783]: E0129 11:24:45.799466 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:45.799846 kubelet[1783]: E0129 11:24:45.799821 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp" Jan 29 11:24:45.799963 kubelet[1783]: E0129 11:24:45.799927 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-x27bp" Jan 29 11:24:45.800258 kubelet[1783]: E0129 11:24:45.800044 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-x27bp_default(48d527ce-aca9-411e-89b8-198e2a4f2d33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-x27bp" podUID="48d527ce-aca9-411e-89b8-198e2a4f2d33" Jan 29 11:24:45.854836 containerd[1470]: time="2025-01-29T11:24:45.854692779Z" level=error msg="Failed to destroy network for sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:45.855791 containerd[1470]: time="2025-01-29T11:24:45.855569452Z" level=error msg="encountered an error cleaning up failed sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:45.855791 containerd[1470]: time="2025-01-29T11:24:45.855672507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for 
sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:45.857479 kubelet[1783]: E0129 11:24:45.856152 1783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:24:45.857479 kubelet[1783]: E0129 11:24:45.856287 1783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc" Jan 29 11:24:45.857479 kubelet[1783]: E0129 11:24:45.856320 1783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4x5wc" Jan 29 11:24:45.857646 kubelet[1783]: E0129 11:24:45.857162 1783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-4x5wc_calico-system(27184255-bf1a-4cc5-b67d-e597a1ff246d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4x5wc" podUID="27184255-bf1a-4cc5-b67d-e597a1ff246d" Jan 29 11:24:45.976942 containerd[1470]: time="2025-01-29T11:24:45.976854503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:45.977895 containerd[1470]: time="2025-01-29T11:24:45.977829018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:24:45.978680 containerd[1470]: time="2025-01-29T11:24:45.978634455Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:45.982343 containerd[1470]: time="2025-01-29T11:24:45.982217854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:45.983371 containerd[1470]: time="2025-01-29T11:24:45.983327468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.448538551s" Jan 29 11:24:45.983460 containerd[1470]: time="2025-01-29T11:24:45.983379531Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:24:46.011755 containerd[1470]: time="2025-01-29T11:24:46.011712321Z" level=info msg="CreateContainer within sandbox \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:24:46.035148 containerd[1470]: time="2025-01-29T11:24:46.035003462Z" level=info msg="CreateContainer within sandbox \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\"" Jan 29 11:24:46.036279 containerd[1470]: time="2025-01-29T11:24:46.035633750Z" level=info msg="StartContainer for \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\"" Jan 29 11:24:46.142781 systemd[1]: Started cri-containerd-18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8.scope - libcontainer container 18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8. Jan 29 11:24:46.187618 containerd[1470]: time="2025-01-29T11:24:46.187470202Z" level=info msg="StartContainer for \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\" returns successfully" Jan 29 11:24:46.285701 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:24:46.285995 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:24:46.358324 kubelet[1783]: E0129 11:24:46.357852 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:46.543813 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb-shm.mount: Deactivated successfully. 
Jan 29 11:24:46.543979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800765538.mount: Deactivated successfully. Jan 29 11:24:46.651451 kubelet[1783]: I0129 11:24:46.649325 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e" Jan 29 11:24:46.651632 containerd[1470]: time="2025-01-29T11:24:46.650852272Z" level=info msg="StopPodSandbox for \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\"" Jan 29 11:24:46.651975 containerd[1470]: time="2025-01-29T11:24:46.651859542Z" level=info msg="Ensure that sandbox b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e in task-service has been cleanup successfully" Jan 29 11:24:46.655028 containerd[1470]: time="2025-01-29T11:24:46.654623159Z" level=info msg="TearDown network for sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\" successfully" Jan 29 11:24:46.655028 containerd[1470]: time="2025-01-29T11:24:46.654669873Z" level=info msg="StopPodSandbox for \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\" returns successfully" Jan 29 11:24:46.657298 systemd[1]: run-netns-cni\x2d147d15fc\x2dfd98\x2ddc28\x2d950c\x2df18322a3c427.mount: Deactivated successfully. 
Jan 29 11:24:46.660274 containerd[1470]: time="2025-01-29T11:24:46.660118436Z" level=info msg="StopPodSandbox for \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\"" Jan 29 11:24:46.660538 containerd[1470]: time="2025-01-29T11:24:46.660439788Z" level=info msg="TearDown network for sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\" successfully" Jan 29 11:24:46.660538 containerd[1470]: time="2025-01-29T11:24:46.660462382Z" level=info msg="StopPodSandbox for \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\" returns successfully" Jan 29 11:24:46.663670 containerd[1470]: time="2025-01-29T11:24:46.663617917Z" level=info msg="StopPodSandbox for \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\"" Jan 29 11:24:46.663793 containerd[1470]: time="2025-01-29T11:24:46.663763608Z" level=info msg="TearDown network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" successfully" Jan 29 11:24:46.663793 containerd[1470]: time="2025-01-29T11:24:46.663785295Z" level=info msg="StopPodSandbox for \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" returns successfully" Jan 29 11:24:46.664539 containerd[1470]: time="2025-01-29T11:24:46.664501373Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\"" Jan 29 11:24:46.664683 containerd[1470]: time="2025-01-29T11:24:46.664641787Z" level=info msg="TearDown network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" successfully" Jan 29 11:24:46.664683 containerd[1470]: time="2025-01-29T11:24:46.664660574Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" returns successfully" Jan 29 11:24:46.665084 containerd[1470]: time="2025-01-29T11:24:46.665060645Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\"" Jan 29 11:24:46.666354 
containerd[1470]: time="2025-01-29T11:24:46.666176878Z" level=info msg="TearDown network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" successfully" Jan 29 11:24:46.666354 containerd[1470]: time="2025-01-29T11:24:46.666218915Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" returns successfully" Jan 29 11:24:46.667289 containerd[1470]: time="2025-01-29T11:24:46.667263223Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\"" Jan 29 11:24:46.668261 kubelet[1783]: I0129 11:24:46.667419 1783 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb" Jan 29 11:24:46.668348 containerd[1470]: time="2025-01-29T11:24:46.667948604Z" level=info msg="StopPodSandbox for \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\"" Jan 29 11:24:46.668432 containerd[1470]: time="2025-01-29T11:24:46.668414443Z" level=info msg="Ensure that sandbox 8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb in task-service has been cleanup successfully" Jan 29 11:24:46.670719 containerd[1470]: time="2025-01-29T11:24:46.670688819Z" level=info msg="TearDown network for sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\" successfully" Jan 29 11:24:46.670833 containerd[1470]: time="2025-01-29T11:24:46.670820447Z" level=info msg="StopPodSandbox for \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\" returns successfully" Jan 29 11:24:46.670944 containerd[1470]: time="2025-01-29T11:24:46.668504004Z" level=info msg="TearDown network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" successfully" Jan 29 11:24:46.671009 containerd[1470]: time="2025-01-29T11:24:46.670998359Z" level=info msg="StopPodSandbox for 
\"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" returns successfully" Jan 29 11:24:46.672200 containerd[1470]: time="2025-01-29T11:24:46.671693211Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\"" Jan 29 11:24:46.672200 containerd[1470]: time="2025-01-29T11:24:46.671778292Z" level=info msg="TearDown network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" successfully" Jan 29 11:24:46.672200 containerd[1470]: time="2025-01-29T11:24:46.671787808Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" returns successfully" Jan 29 11:24:46.672200 containerd[1470]: time="2025-01-29T11:24:46.671856254Z" level=info msg="StopPodSandbox for \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\"" Jan 29 11:24:46.672200 containerd[1470]: time="2025-01-29T11:24:46.671931278Z" level=info msg="TearDown network for sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\" successfully" Jan 29 11:24:46.672200 containerd[1470]: time="2025-01-29T11:24:46.671945690Z" level=info msg="StopPodSandbox for \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\" returns successfully" Jan 29 11:24:46.672837 containerd[1470]: time="2025-01-29T11:24:46.672813762Z" level=info msg="StopPodSandbox for \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\"" Jan 29 11:24:46.672850 systemd[1]: run-netns-cni\x2d7ca3e730\x2d1988\x2d71d5\x2d5863\x2d612597056710.mount: Deactivated successfully. 
Jan 29 11:24:46.673417 containerd[1470]: time="2025-01-29T11:24:46.673038640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:7,}" Jan 29 11:24:46.673417 containerd[1470]: time="2025-01-29T11:24:46.673335897Z" level=info msg="TearDown network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" successfully" Jan 29 11:24:46.673417 containerd[1470]: time="2025-01-29T11:24:46.673354878Z" level=info msg="StopPodSandbox for \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" returns successfully" Jan 29 11:24:46.675747 containerd[1470]: time="2025-01-29T11:24:46.674816177Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\"" Jan 29 11:24:46.675747 containerd[1470]: time="2025-01-29T11:24:46.674910881Z" level=info msg="TearDown network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" successfully" Jan 29 11:24:46.675747 containerd[1470]: time="2025-01-29T11:24:46.674921466Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" returns successfully" Jan 29 11:24:46.675747 containerd[1470]: time="2025-01-29T11:24:46.675216738Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\"" Jan 29 11:24:46.675747 containerd[1470]: time="2025-01-29T11:24:46.675650114Z" level=info msg="TearDown network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" successfully" Jan 29 11:24:46.675747 containerd[1470]: time="2025-01-29T11:24:46.675672457Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" returns successfully" Jan 29 11:24:46.676596 kubelet[1783]: E0129 11:24:46.676172 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:46.676709 containerd[1470]: time="2025-01-29T11:24:46.676403388Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\"" Jan 29 11:24:46.676709 containerd[1470]: time="2025-01-29T11:24:46.676510613Z" level=info msg="TearDown network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" successfully" Jan 29 11:24:46.676709 containerd[1470]: time="2025-01-29T11:24:46.676525856Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" returns successfully" Jan 29 11:24:46.677614 containerd[1470]: time="2025-01-29T11:24:46.677193166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:6,}" Jan 29 11:24:47.105198 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 29 11:24:47.108432 systemd-networkd[1376]: cali09cfb0603f2: Link UP Jan 29 11:24:47.113295 systemd-networkd[1376]: cali09cfb0603f2: Gained carrier Jan 29 11:24:47.113571 systemd-resolved[1330]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 29 11:24:47.131567 kubelet[1783]: I0129 11:24:47.130371 1783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w74sz" podStartSLOduration=5.020193905 podStartE2EDuration="20.130325191s" podCreationTimestamp="2025-01-29 11:24:27 +0000 UTC" firstStartedPulling="2025-01-29 11:24:30.874644877 +0000 UTC m=+4.273963294" lastFinishedPulling="2025-01-29 11:24:45.98477618 +0000 UTC m=+19.384094580" observedRunningTime="2025-01-29 11:24:46.732982261 +0000 UTC m=+20.132300707" watchObservedRunningTime="2025-01-29 11:24:47.130325191 +0000 UTC m=+20.529643592" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:46.767 [INFO][2709] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:46.813 [INFO][2709] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0 nginx-deployment-7fcdb87857- default 48d527ce-aca9-411e-89b8-198e2a4f2d33 1211 0 2025-01-29 11:24:40 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 164.92.103.73 nginx-deployment-7fcdb87857-x27bp eth0 default [] [] [kns.default ksa.default.default] cali09cfb0603f2 [] []}} ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Namespace="default" Pod="nginx-deployment-7fcdb87857-x27bp" WorkloadEndpoint="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:46.813 [INFO][2709] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Namespace="default" Pod="nginx-deployment-7fcdb87857-x27bp" WorkloadEndpoint="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" Jan 29 11:24:47.137537 
containerd[1470]: 2025-01-29 11:24:46.875 [INFO][2732] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" HandleID="k8s-pod-network.a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Workload="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.000 [INFO][2732] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" HandleID="k8s-pod-network.a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Workload="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b370), Attrs:map[string]string{"namespace":"default", "node":"164.92.103.73", "pod":"nginx-deployment-7fcdb87857-x27bp", "timestamp":"2025-01-29 11:24:46.875655881 +0000 UTC"}, Hostname:"164.92.103.73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.000 [INFO][2732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.000 [INFO][2732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.000 [INFO][2732] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.103.73' Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.009 [INFO][2732] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" host="164.92.103.73" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.017 [INFO][2732] ipam/ipam.go 372: Looking up existing affinities for host host="164.92.103.73" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.039 [INFO][2732] ipam/ipam.go 489: Trying affinity for 192.168.127.192/26 host="164.92.103.73" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.043 [INFO][2732] ipam/ipam.go 155: Attempting to load block cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.048 [INFO][2732] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.048 [INFO][2732] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.192/26 handle="k8s-pod-network.a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" host="164.92.103.73" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.054 [INFO][2732] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25 Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.062 [INFO][2732] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.127.192/26 handle="k8s-pod-network.a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" host="164.92.103.73" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.086 [INFO][2732] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.127.193/26] block=192.168.127.192/26 
handle="k8s-pod-network.a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" host="164.92.103.73" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.087 [INFO][2732] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.193/26] handle="k8s-pod-network.a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" host="164.92.103.73" Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.087 [INFO][2732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:24:47.137537 containerd[1470]: 2025-01-29 11:24:47.087 [INFO][2732] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.193/26] IPv6=[] ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" HandleID="k8s-pod-network.a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Workload="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" Jan 29 11:24:47.138749 containerd[1470]: 2025-01-29 11:24:47.091 [INFO][2709] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Namespace="default" Pod="nginx-deployment-7fcdb87857-x27bp" WorkloadEndpoint="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"48d527ce-aca9-411e-89b8-198e2a4f2d33", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 24, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.103.73", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-x27bp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali09cfb0603f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:24:47.138749 containerd[1470]: 2025-01-29 11:24:47.092 [INFO][2709] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.127.193/32] ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Namespace="default" Pod="nginx-deployment-7fcdb87857-x27bp" WorkloadEndpoint="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" Jan 29 11:24:47.138749 containerd[1470]: 2025-01-29 11:24:47.092 [INFO][2709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09cfb0603f2 ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Namespace="default" Pod="nginx-deployment-7fcdb87857-x27bp" WorkloadEndpoint="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" Jan 29 11:24:47.138749 containerd[1470]: 2025-01-29 11:24:47.110 [INFO][2709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Namespace="default" Pod="nginx-deployment-7fcdb87857-x27bp" WorkloadEndpoint="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" Jan 29 11:24:47.138749 containerd[1470]: 2025-01-29 11:24:47.112 [INFO][2709] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Namespace="default" 
Pod="nginx-deployment-7fcdb87857-x27bp" WorkloadEndpoint="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"48d527ce-aca9-411e-89b8-198e2a4f2d33", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 24, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.103.73", ContainerID:"a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25", Pod:"nginx-deployment-7fcdb87857-x27bp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali09cfb0603f2", MAC:"42:a6:c6:6f:b6:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:24:47.138749 containerd[1470]: 2025-01-29 11:24:47.131 [INFO][2709] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25" Namespace="default" Pod="nginx-deployment-7fcdb87857-x27bp" WorkloadEndpoint="164.92.103.73-k8s-nginx--deployment--7fcdb87857--x27bp-eth0" Jan 29 11:24:47.186977 systemd-timesyncd[1351]: Contacted time server 72.30.35.88:123 (2.flatcar.pool.ntp.org). 
Jan 29 11:24:47.187092 systemd-timesyncd[1351]: Initial clock synchronization to Wed 2025-01-29 11:24:46.840630 UTC. Jan 29 11:24:47.233177 containerd[1470]: time="2025-01-29T11:24:47.232565228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:47.233177 containerd[1470]: time="2025-01-29T11:24:47.232661754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:47.233177 containerd[1470]: time="2025-01-29T11:24:47.232674722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:47.234353 containerd[1470]: time="2025-01-29T11:24:47.233324834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:47.276610 systemd-networkd[1376]: cali0fe5c1d001e: Link UP Jan 29 11:24:47.277601 systemd-networkd[1376]: cali0fe5c1d001e: Gained carrier Jan 29 11:24:47.283602 systemd[1]: Started cri-containerd-a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25.scope - libcontainer container a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25. 
Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:46.759 [INFO][2704] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:46.813 [INFO][2704] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {164.92.103.73-k8s-csi--node--driver--4x5wc-eth0 csi-node-driver- calico-system 27184255-bf1a-4cc5-b67d-e597a1ff246d 1096 0 2025-01-29 11:24:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 164.92.103.73 csi-node-driver-4x5wc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0fe5c1d001e [] []}} ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Namespace="calico-system" Pod="csi-node-driver-4x5wc" WorkloadEndpoint="164.92.103.73-k8s-csi--node--driver--4x5wc-" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:46.813 [INFO][2704] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Namespace="calico-system" Pod="csi-node-driver-4x5wc" WorkloadEndpoint="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:46.885 [INFO][2728] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" HandleID="k8s-pod-network.e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Workload="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.005 [INFO][2728] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" HandleID="k8s-pod-network.e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Workload="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a9a10), Attrs:map[string]string{"namespace":"calico-system", "node":"164.92.103.73", "pod":"csi-node-driver-4x5wc", "timestamp":"2025-01-29 11:24:46.885294163 +0000 UTC"}, Hostname:"164.92.103.73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.005 [INFO][2728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.088 [INFO][2728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.088 [INFO][2728] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.103.73' Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.114 [INFO][2728] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" host="164.92.103.73" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.148 [INFO][2728] ipam/ipam.go 372: Looking up existing affinities for host host="164.92.103.73" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.169 [INFO][2728] ipam/ipam.go 489: Trying affinity for 192.168.127.192/26 host="164.92.103.73" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.179 [INFO][2728] ipam/ipam.go 155: Attempting to load block cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.184 [INFO][2728] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.184 [INFO][2728] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.192/26 handle="k8s-pod-network.e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" host="164.92.103.73" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.200 [INFO][2728] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831 Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.225 [INFO][2728] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.127.192/26 handle="k8s-pod-network.e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" host="164.92.103.73" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.238 [INFO][2728] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.127.194/26] block=192.168.127.192/26 handle="k8s-pod-network.e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" host="164.92.103.73" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.241 [INFO][2728] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.194/26] handle="k8s-pod-network.e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" host="164.92.103.73" Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.241 [INFO][2728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:24:47.324511 containerd[1470]: 2025-01-29 11:24:47.241 [INFO][2728] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.194/26] IPv6=[] ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" HandleID="k8s-pod-network.e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Workload="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" Jan 29 11:24:47.325666 containerd[1470]: 2025-01-29 11:24:47.265 [INFO][2704] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Namespace="calico-system" Pod="csi-node-driver-4x5wc" WorkloadEndpoint="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-csi--node--driver--4x5wc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27184255-bf1a-4cc5-b67d-e597a1ff246d", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.103.73", ContainerID:"", Pod:"csi-node-driver-4x5wc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fe5c1d001e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:24:47.325666 containerd[1470]: 2025-01-29 11:24:47.265 [INFO][2704] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.127.194/32] ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Namespace="calico-system" Pod="csi-node-driver-4x5wc" WorkloadEndpoint="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" Jan 29 11:24:47.325666 containerd[1470]: 2025-01-29 11:24:47.265 [INFO][2704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fe5c1d001e ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Namespace="calico-system" Pod="csi-node-driver-4x5wc" WorkloadEndpoint="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" Jan 29 11:24:47.325666 containerd[1470]: 2025-01-29 11:24:47.279 [INFO][2704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Namespace="calico-system" Pod="csi-node-driver-4x5wc" WorkloadEndpoint="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" Jan 29 11:24:47.325666 containerd[1470]: 2025-01-29 11:24:47.283 [INFO][2704] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Namespace="calico-system" Pod="csi-node-driver-4x5wc" WorkloadEndpoint="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-csi--node--driver--4x5wc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27184255-bf1a-4cc5-b67d-e597a1ff246d", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, 
time.January, 29, 11, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.103.73", ContainerID:"e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831", Pod:"csi-node-driver-4x5wc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fe5c1d001e", MAC:"9a:c7:ff:3c:14:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:24:47.325666 containerd[1470]: 2025-01-29 11:24:47.317 [INFO][2704] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831" Namespace="calico-system" Pod="csi-node-driver-4x5wc" WorkloadEndpoint="164.92.103.73-k8s-csi--node--driver--4x5wc-eth0" Jan 29 11:24:47.336987 kubelet[1783]: E0129 11:24:47.336885 1783 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:47.360369 kubelet[1783]: E0129 11:24:47.360149 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:47.397098 containerd[1470]: time="2025-01-29T11:24:47.395649633Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:47.397098 containerd[1470]: time="2025-01-29T11:24:47.395772323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:47.397098 containerd[1470]: time="2025-01-29T11:24:47.395798906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:47.397098 containerd[1470]: time="2025-01-29T11:24:47.395954794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:47.446051 systemd[1]: Started cri-containerd-e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831.scope - libcontainer container e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831. Jan 29 11:24:47.480359 containerd[1470]: time="2025-01-29T11:24:47.480012842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-x27bp,Uid:48d527ce-aca9-411e-89b8-198e2a4f2d33,Namespace:default,Attempt:6,} returns sandbox id \"a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25\"" Jan 29 11:24:47.488093 containerd[1470]: time="2025-01-29T11:24:47.487728428Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:24:47.603513 containerd[1470]: time="2025-01-29T11:24:47.603378503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4x5wc,Uid:27184255-bf1a-4cc5-b67d-e597a1ff246d,Namespace:calico-system,Attempt:7,} returns sandbox id \"e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831\"" Jan 29 11:24:47.688949 kubelet[1783]: I0129 11:24:47.688908 1783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:24:47.690279 kubelet[1783]: E0129 11:24:47.690144 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:47.711265 kernel: bpftool[2978]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:24:47.991505 systemd-networkd[1376]: vxlan.calico: Link UP Jan 29 11:24:47.993056 systemd-networkd[1376]: vxlan.calico: Gained carrier Jan 29 11:24:48.361505 kubelet[1783]: E0129 11:24:48.361063 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:48.640760 systemd-networkd[1376]: cali09cfb0603f2: Gained IPv6LL Jan 29 11:24:49.217574 systemd-networkd[1376]: cali0fe5c1d001e: Gained IPv6LL Jan 29 11:24:49.361618 kubelet[1783]: E0129 11:24:49.361571 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:49.409848 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Jan 29 11:24:50.228534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752782725.mount: Deactivated successfully. 
Jan 29 11:24:50.362599 kubelet[1783]: E0129 11:24:50.362537 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:50.706303 kubelet[1783]: I0129 11:24:50.706008 1783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:24:50.706562 kubelet[1783]: E0129 11:24:50.706478 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:51.363825 kubelet[1783]: E0129 11:24:51.363777 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:51.698924 kubelet[1783]: E0129 11:24:51.698315 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:51.734285 containerd[1470]: time="2025-01-29T11:24:51.734024103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:51.735495 containerd[1470]: time="2025-01-29T11:24:51.735246519Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 11:24:51.736328 containerd[1470]: time="2025-01-29T11:24:51.736190020Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:51.739628 containerd[1470]: time="2025-01-29T11:24:51.739160692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:51.740367 containerd[1470]: 
time="2025-01-29T11:24:51.740331080Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.252130966s" Jan 29 11:24:51.740367 containerd[1470]: time="2025-01-29T11:24:51.740367291Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:24:51.742759 containerd[1470]: time="2025-01-29T11:24:51.742731257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:24:51.744777 containerd[1470]: time="2025-01-29T11:24:51.744560648Z" level=info msg="CreateContainer within sandbox \"a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 11:24:51.773840 containerd[1470]: time="2025-01-29T11:24:51.773745113Z" level=info msg="CreateContainer within sandbox \"a52d4b14e840aa8d6106161f0c22c1fc9d58d503f184eab7ea96f9b32ffa4f25\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"01907b5ac4c7d18cc8c090212f76c4d6facf9d37cff20b44a82910b47d090e81\"" Jan 29 11:24:51.775758 containerd[1470]: time="2025-01-29T11:24:51.774610188Z" level=info msg="StartContainer for \"01907b5ac4c7d18cc8c090212f76c4d6facf9d37cff20b44a82910b47d090e81\"" Jan 29 11:24:51.822544 systemd[1]: Started cri-containerd-01907b5ac4c7d18cc8c090212f76c4d6facf9d37cff20b44a82910b47d090e81.scope - libcontainer container 01907b5ac4c7d18cc8c090212f76c4d6facf9d37cff20b44a82910b47d090e81. 
Jan 29 11:24:51.858929 containerd[1470]: time="2025-01-29T11:24:51.858880544Z" level=info msg="StartContainer for \"01907b5ac4c7d18cc8c090212f76c4d6facf9d37cff20b44a82910b47d090e81\" returns successfully" Jan 29 11:24:52.365773 kubelet[1783]: E0129 11:24:52.365702 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:53.063676 containerd[1470]: time="2025-01-29T11:24:53.063619973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:53.065005 containerd[1470]: time="2025-01-29T11:24:53.064952200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:24:53.065355 containerd[1470]: time="2025-01-29T11:24:53.065328840Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:53.067714 containerd[1470]: time="2025-01-29T11:24:53.067673470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:53.068707 containerd[1470]: time="2025-01-29T11:24:53.068677722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.325587415s" Jan 29 11:24:53.068857 containerd[1470]: time="2025-01-29T11:24:53.068836173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:24:53.071730 containerd[1470]: time="2025-01-29T11:24:53.071700825Z" level=info msg="CreateContainer within sandbox \"e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:24:53.091544 containerd[1470]: time="2025-01-29T11:24:53.091463793Z" level=info msg="CreateContainer within sandbox \"e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"062960a670277b25ea4fc78ec4f9121fcc7b5fc798dd01925548daaff0de3156\"" Jan 29 11:24:53.092600 containerd[1470]: time="2025-01-29T11:24:53.092547382Z" level=info msg="StartContainer for \"062960a670277b25ea4fc78ec4f9121fcc7b5fc798dd01925548daaff0de3156\"" Jan 29 11:24:53.128523 systemd[1]: run-containerd-runc-k8s.io-062960a670277b25ea4fc78ec4f9121fcc7b5fc798dd01925548daaff0de3156-runc.WvegZM.mount: Deactivated successfully. Jan 29 11:24:53.142496 systemd[1]: Started cri-containerd-062960a670277b25ea4fc78ec4f9121fcc7b5fc798dd01925548daaff0de3156.scope - libcontainer container 062960a670277b25ea4fc78ec4f9121fcc7b5fc798dd01925548daaff0de3156. 
Jan 29 11:24:53.181756 containerd[1470]: time="2025-01-29T11:24:53.181639865Z" level=info msg="StartContainer for \"062960a670277b25ea4fc78ec4f9121fcc7b5fc798dd01925548daaff0de3156\" returns successfully" Jan 29 11:24:53.185033 containerd[1470]: time="2025-01-29T11:24:53.184847735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:24:53.366091 kubelet[1783]: E0129 11:24:53.366036 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:54.367342 kubelet[1783]: E0129 11:24:54.367176 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:54.540330 containerd[1470]: time="2025-01-29T11:24:54.539626234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:54.541801 containerd[1470]: time="2025-01-29T11:24:54.541703240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:24:54.543159 containerd[1470]: time="2025-01-29T11:24:54.543083664Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:54.545402 containerd[1470]: time="2025-01-29T11:24:54.545339555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:54.546753 containerd[1470]: time="2025-01-29T11:24:54.546292376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.361401199s" Jan 29 11:24:54.546753 containerd[1470]: time="2025-01-29T11:24:54.546340313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:24:54.548839 containerd[1470]: time="2025-01-29T11:24:54.548803915Z" level=info msg="CreateContainer within sandbox \"e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:24:54.570118 containerd[1470]: time="2025-01-29T11:24:54.569967482Z" level=info msg="CreateContainer within sandbox \"e70205848d6fa228bab7d506ea40499e3e92b821025d238b84c09d430d5cb831\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"44c265c0af553111258cb1d1c1de1af8661c4d76f9808594bc33f2b4fdfe21f3\"" Jan 29 11:24:54.570997 containerd[1470]: time="2025-01-29T11:24:54.570770742Z" level=info msg="StartContainer for \"44c265c0af553111258cb1d1c1de1af8661c4d76f9808594bc33f2b4fdfe21f3\"" Jan 29 11:24:54.611549 systemd[1]: Started cri-containerd-44c265c0af553111258cb1d1c1de1af8661c4d76f9808594bc33f2b4fdfe21f3.scope - libcontainer container 44c265c0af553111258cb1d1c1de1af8661c4d76f9808594bc33f2b4fdfe21f3. 
Jan 29 11:24:54.649648 containerd[1470]: time="2025-01-29T11:24:54.649359274Z" level=info msg="StartContainer for \"44c265c0af553111258cb1d1c1de1af8661c4d76f9808594bc33f2b4fdfe21f3\" returns successfully" Jan 29 11:24:54.747666 kubelet[1783]: I0129 11:24:54.747213 1783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4x5wc" podStartSLOduration=20.806519339 podStartE2EDuration="27.747189014s" podCreationTimestamp="2025-01-29 11:24:27 +0000 UTC" firstStartedPulling="2025-01-29 11:24:47.606919729 +0000 UTC m=+21.006238146" lastFinishedPulling="2025-01-29 11:24:54.547589417 +0000 UTC m=+27.946907821" observedRunningTime="2025-01-29 11:24:54.746964028 +0000 UTC m=+28.146282451" watchObservedRunningTime="2025-01-29 11:24:54.747189014 +0000 UTC m=+28.146507438" Jan 29 11:24:54.747666 kubelet[1783]: I0129 11:24:54.747522 1783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-x27bp" podStartSLOduration=10.489346729 podStartE2EDuration="14.747508291s" podCreationTimestamp="2025-01-29 11:24:40 +0000 UTC" firstStartedPulling="2025-01-29 11:24:47.484097629 +0000 UTC m=+20.883416044" lastFinishedPulling="2025-01-29 11:24:51.742259193 +0000 UTC m=+25.141577606" observedRunningTime="2025-01-29 11:24:52.724412502 +0000 UTC m=+26.123730925" watchObservedRunningTime="2025-01-29 11:24:54.747508291 +0000 UTC m=+28.146826714" Jan 29 11:24:55.046981 systemd[1]: Created slice kubepods-besteffort-poddd1370cd_581b_4938_bae8_ab438dd8d42c.slice - libcontainer container kubepods-besteffort-poddd1370cd_581b_4938_bae8_ab438dd8d42c.slice. 
Jan 29 11:24:55.097317 kubelet[1783]: I0129 11:24:55.097259 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dkdt\" (UniqueName: \"kubernetes.io/projected/dd1370cd-581b-4938-bae8-ab438dd8d42c-kube-api-access-8dkdt\") pod \"calico-typha-7cb95dc59c-r842s\" (UID: \"dd1370cd-581b-4938-bae8-ab438dd8d42c\") " pod="calico-system/calico-typha-7cb95dc59c-r842s" Jan 29 11:24:55.097457 kubelet[1783]: I0129 11:24:55.097331 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1370cd-581b-4938-bae8-ab438dd8d42c-tigera-ca-bundle\") pod \"calico-typha-7cb95dc59c-r842s\" (UID: \"dd1370cd-581b-4938-bae8-ab438dd8d42c\") " pod="calico-system/calico-typha-7cb95dc59c-r842s" Jan 29 11:24:55.097457 kubelet[1783]: I0129 11:24:55.097367 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd1370cd-581b-4938-bae8-ab438dd8d42c-typha-certs\") pod \"calico-typha-7cb95dc59c-r842s\" (UID: \"dd1370cd-581b-4938-bae8-ab438dd8d42c\") " pod="calico-system/calico-typha-7cb95dc59c-r842s" Jan 29 11:24:55.351659 kubelet[1783]: E0129 11:24:55.350790 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:55.351814 containerd[1470]: time="2025-01-29T11:24:55.351573776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cb95dc59c-r842s,Uid:dd1370cd-581b-4938-bae8-ab438dd8d42c,Namespace:calico-system,Attempt:0,}" Jan 29 11:24:55.368266 kubelet[1783]: E0129 11:24:55.368141 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:55.386103 containerd[1470]: time="2025-01-29T11:24:55.385943199Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:55.386757 containerd[1470]: time="2025-01-29T11:24:55.386360366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:55.386757 containerd[1470]: time="2025-01-29T11:24:55.386398036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:55.386757 containerd[1470]: time="2025-01-29T11:24:55.386565369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:55.410560 systemd[1]: Started cri-containerd-d7f50d98a0326990a8e416b973eeb1f380e9c51f05939d1a91f603bb0ba0c026.scope - libcontainer container d7f50d98a0326990a8e416b973eeb1f380e9c51f05939d1a91f603bb0ba0c026. Jan 29 11:24:55.464341 containerd[1470]: time="2025-01-29T11:24:55.464167594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cb95dc59c-r842s,Uid:dd1370cd-581b-4938-bae8-ab438dd8d42c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7f50d98a0326990a8e416b973eeb1f380e9c51f05939d1a91f603bb0ba0c026\"" Jan 29 11:24:55.465612 kubelet[1783]: E0129 11:24:55.465207 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:55.466982 containerd[1470]: time="2025-01-29T11:24:55.466608712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:24:55.493854 kubelet[1783]: I0129 11:24:55.493802 1783 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:24:55.494520 kubelet[1783]: I0129 
11:24:55.494147 1783 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:24:56.369350 kubelet[1783]: E0129 11:24:56.369286 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:57.290084 systemd[1]: Created slice kubepods-besteffort-podb185d7d2_b846_446e_baa7_62814adc9399.slice - libcontainer container kubepods-besteffort-podb185d7d2_b846_446e_baa7_62814adc9399.slice. Jan 29 11:24:57.370206 kubelet[1783]: E0129 11:24:57.370137 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:24:57.378137 containerd[1470]: time="2025-01-29T11:24:57.377460286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:57.378848 containerd[1470]: time="2025-01-29T11:24:57.378788448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 29 11:24:57.379122 containerd[1470]: time="2025-01-29T11:24:57.379099561Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:57.382082 containerd[1470]: time="2025-01-29T11:24:57.382032293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:57.382762 containerd[1470]: time="2025-01-29T11:24:57.382725184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.916076706s" Jan 29 11:24:57.382762 containerd[1470]: time="2025-01-29T11:24:57.382760314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:24:57.401302 containerd[1470]: time="2025-01-29T11:24:57.400508971Z" level=info msg="CreateContainer within sandbox \"d7f50d98a0326990a8e416b973eeb1f380e9c51f05939d1a91f603bb0ba0c026\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:24:57.414763 kubelet[1783]: I0129 11:24:57.414054 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b185d7d2-b846-446e-baa7-62814adc9399-tigera-ca-bundle\") pod \"calico-kube-controllers-77d9686979-n7vxn\" (UID: \"b185d7d2-b846-446e-baa7-62814adc9399\") " pod="calico-system/calico-kube-controllers-77d9686979-n7vxn" Jan 29 11:24:57.414763 kubelet[1783]: I0129 11:24:57.414115 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq6xx\" (UniqueName: \"kubernetes.io/projected/b185d7d2-b846-446e-baa7-62814adc9399-kube-api-access-pq6xx\") pod \"calico-kube-controllers-77d9686979-n7vxn\" (UID: \"b185d7d2-b846-446e-baa7-62814adc9399\") " pod="calico-system/calico-kube-controllers-77d9686979-n7vxn" Jan 29 11:24:57.420801 containerd[1470]: time="2025-01-29T11:24:57.420596382Z" level=info msg="CreateContainer within sandbox \"d7f50d98a0326990a8e416b973eeb1f380e9c51f05939d1a91f603bb0ba0c026\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d966db77864800f37178b75af656a2efce270208568e3a59a1211d1a2f2d3240\"" Jan 29 11:24:57.421621 containerd[1470]: time="2025-01-29T11:24:57.421444306Z" level=info msg="StartContainer for 
\"d966db77864800f37178b75af656a2efce270208568e3a59a1211d1a2f2d3240\"" Jan 29 11:24:57.460842 systemd[1]: Started cri-containerd-d966db77864800f37178b75af656a2efce270208568e3a59a1211d1a2f2d3240.scope - libcontainer container d966db77864800f37178b75af656a2efce270208568e3a59a1211d1a2f2d3240. Jan 29 11:24:57.528604 containerd[1470]: time="2025-01-29T11:24:57.528439665Z" level=info msg="StartContainer for \"d966db77864800f37178b75af656a2efce270208568e3a59a1211d1a2f2d3240\" returns successfully" Jan 29 11:24:57.606042 containerd[1470]: time="2025-01-29T11:24:57.605834050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d9686979-n7vxn,Uid:b185d7d2-b846-446e-baa7-62814adc9399,Namespace:calico-system,Attempt:0,}" Jan 29 11:24:57.729957 kubelet[1783]: E0129 11:24:57.729664 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:24:57.772730 kubelet[1783]: I0129 11:24:57.772656 1783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cb95dc59c-r842s" podStartSLOduration=1.854269664 podStartE2EDuration="3.7726383s" podCreationTimestamp="2025-01-29 11:24:54 +0000 UTC" firstStartedPulling="2025-01-29 11:24:55.466105621 +0000 UTC m=+28.865424040" lastFinishedPulling="2025-01-29 11:24:57.384474274 +0000 UTC m=+30.783792676" observedRunningTime="2025-01-29 11:24:57.770384532 +0000 UTC m=+31.169702953" watchObservedRunningTime="2025-01-29 11:24:57.7726383 +0000 UTC m=+31.171956722" Jan 29 11:24:57.915031 systemd-networkd[1376]: cali31a297e064a: Link UP Jan 29 11:24:57.915736 systemd-networkd[1376]: cali31a297e064a: Gained carrier Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.687 [INFO][3377] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0 calico-kube-controllers-77d9686979- calico-system b185d7d2-b846-446e-baa7-62814adc9399 1412 0 2025-01-29 11:24:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77d9686979 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 164.92.103.73 calico-kube-controllers-77d9686979-n7vxn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali31a297e064a [] []}} ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Namespace="calico-system" Pod="calico-kube-controllers-77d9686979-n7vxn" WorkloadEndpoint="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.687 [INFO][3377] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Namespace="calico-system" Pod="calico-kube-controllers-77d9686979-n7vxn" WorkloadEndpoint="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.728 [INFO][3387] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" HandleID="k8s-pod-network.a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Workload="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.846 [INFO][3387] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" HandleID="k8s-pod-network.a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" 
Workload="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290830), Attrs:map[string]string{"namespace":"calico-system", "node":"164.92.103.73", "pod":"calico-kube-controllers-77d9686979-n7vxn", "timestamp":"2025-01-29 11:24:57.728049017 +0000 UTC"}, Hostname:"164.92.103.73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.846 [INFO][3387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.846 [INFO][3387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.846 [INFO][3387] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.103.73' Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.852 [INFO][3387] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" host="164.92.103.73" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.859 [INFO][3387] ipam/ipam.go 372: Looking up existing affinities for host host="164.92.103.73" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.871 [INFO][3387] ipam/ipam.go 489: Trying affinity for 192.168.127.192/26 host="164.92.103.73" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.875 [INFO][3387] ipam/ipam.go 155: Attempting to load block cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.879 [INFO][3387] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.879 [INFO][3387] 
ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.192/26 handle="k8s-pod-network.a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" host="164.92.103.73" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.882 [INFO][3387] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3 Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.899 [INFO][3387] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.127.192/26 handle="k8s-pod-network.a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" host="164.92.103.73" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.909 [INFO][3387] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.127.195/26] block=192.168.127.192/26 handle="k8s-pod-network.a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" host="164.92.103.73" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.909 [INFO][3387] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.195/26] handle="k8s-pod-network.a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" host="164.92.103.73" Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.909 [INFO][3387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:24:57.936058 containerd[1470]: 2025-01-29 11:24:57.910 [INFO][3387] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.195/26] IPv6=[] ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" HandleID="k8s-pod-network.a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Workload="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" Jan 29 11:24:57.937482 containerd[1470]: 2025-01-29 11:24:57.912 [INFO][3377] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Namespace="calico-system" Pod="calico-kube-controllers-77d9686979-n7vxn" WorkloadEndpoint="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0", GenerateName:"calico-kube-controllers-77d9686979-", Namespace:"calico-system", SelfLink:"", UID:"b185d7d2-b846-446e-baa7-62814adc9399", ResourceVersion:"1412", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d9686979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.103.73", ContainerID:"", Pod:"calico-kube-controllers-77d9686979-n7vxn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.127.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31a297e064a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:24:57.937482 containerd[1470]: 2025-01-29 11:24:57.912 [INFO][3377] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.127.195/32] ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Namespace="calico-system" Pod="calico-kube-controllers-77d9686979-n7vxn" WorkloadEndpoint="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" Jan 29 11:24:57.937482 containerd[1470]: 2025-01-29 11:24:57.912 [INFO][3377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31a297e064a ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Namespace="calico-system" Pod="calico-kube-controllers-77d9686979-n7vxn" WorkloadEndpoint="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" Jan 29 11:24:57.937482 containerd[1470]: 2025-01-29 11:24:57.915 [INFO][3377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Namespace="calico-system" Pod="calico-kube-controllers-77d9686979-n7vxn" WorkloadEndpoint="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" Jan 29 11:24:57.937482 containerd[1470]: 2025-01-29 11:24:57.916 [INFO][3377] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Namespace="calico-system" Pod="calico-kube-controllers-77d9686979-n7vxn" WorkloadEndpoint="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0", GenerateName:"calico-kube-controllers-77d9686979-", Namespace:"calico-system", SelfLink:"", UID:"b185d7d2-b846-446e-baa7-62814adc9399", ResourceVersion:"1412", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d9686979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.103.73", ContainerID:"a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3", Pod:"calico-kube-controllers-77d9686979-n7vxn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31a297e064a", MAC:"66:60:30:55:fd:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:24:57.937482 containerd[1470]: 2025-01-29 11:24:57.934 [INFO][3377] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3" Namespace="calico-system" Pod="calico-kube-controllers-77d9686979-n7vxn" WorkloadEndpoint="164.92.103.73-k8s-calico--kube--controllers--77d9686979--n7vxn-eth0" Jan 29 11:24:57.967433 containerd[1470]: time="2025-01-29T11:24:57.967341057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:24:57.967668 containerd[1470]: time="2025-01-29T11:24:57.967409917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:24:57.967668 containerd[1470]: time="2025-01-29T11:24:57.967437827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:24:57.968912 containerd[1470]: time="2025-01-29T11:24:57.968659080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:24:57.989489 systemd[1]: Started cri-containerd-a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3.scope - libcontainer container a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3.
Jan 29 11:24:58.040811 containerd[1470]: time="2025-01-29T11:24:58.040695230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d9686979-n7vxn,Uid:b185d7d2-b846-446e-baa7-62814adc9399,Namespace:calico-system,Attempt:0,} returns sandbox id \"a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3\""
Jan 29 11:24:58.076547 containerd[1470]: time="2025-01-29T11:24:58.076344897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Jan 29 11:24:58.370382 kubelet[1783]: E0129 11:24:58.370328 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:58.731909 kubelet[1783]: I0129 11:24:58.731768 1783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:24:58.732330 kubelet[1783]: E0129 11:24:58.732131 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:24:59.371067 kubelet[1783]: E0129 11:24:59.370998 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:24:59.457927 systemd-networkd[1376]: cali31a297e064a: Gained IPv6LL
Jan 29 11:24:59.872354 systemd[1]: Created slice kubepods-besteffort-pod7ed6788c_feda_44d1_82c5_d9c6fc053a44.slice - libcontainer container kubepods-besteffort-pod7ed6788c_feda_44d1_82c5_d9c6fc053a44.slice.
Jan 29 11:24:59.932257 kubelet[1783]: I0129 11:24:59.931589 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jxcz\" (UniqueName: \"kubernetes.io/projected/7ed6788c-feda-44d1-82c5-d9c6fc053a44-kube-api-access-5jxcz\") pod \"nfs-server-provisioner-0\" (UID: \"7ed6788c-feda-44d1-82c5-d9c6fc053a44\") " pod="default/nfs-server-provisioner-0"
Jan 29 11:24:59.932257 kubelet[1783]: I0129 11:24:59.931665 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7ed6788c-feda-44d1-82c5-d9c6fc053a44-data\") pod \"nfs-server-provisioner-0\" (UID: \"7ed6788c-feda-44d1-82c5-d9c6fc053a44\") " pod="default/nfs-server-provisioner-0"
Jan 29 11:25:00.179245 containerd[1470]: time="2025-01-29T11:25:00.178758476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7ed6788c-feda-44d1-82c5-d9c6fc053a44,Namespace:default,Attempt:0,}"
Jan 29 11:25:00.187839 containerd[1470]: time="2025-01-29T11:25:00.186559713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Jan 29 11:25:00.187839 containerd[1470]: time="2025-01-29T11:25:00.187320141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:25:00.190262 containerd[1470]: time="2025-01-29T11:25:00.190126524Z" level=info
msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:00.191746 containerd[1470]: time="2025-01-29T11:25:00.190750300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.114369129s" Jan 29 11:25:00.191746 containerd[1470]: time="2025-01-29T11:25:00.190786233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:25:00.191746 containerd[1470]: time="2025-01-29T11:25:00.191114331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:00.212996 containerd[1470]: time="2025-01-29T11:25:00.210938651Z" level=info msg="CreateContainer within sandbox \"a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:25:00.305130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236061124.mount: Deactivated successfully. 
Jan 29 11:25:00.317145 containerd[1470]: time="2025-01-29T11:25:00.316906817Z" level=info msg="CreateContainer within sandbox \"a664bd0914ad44a8b379da135123bd3f2e6f7494cd39bfa08037720a0e84b2f3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5670d5e9f3f26a35874df273ccfaa56d407ac9b1d5e55b5f187fe48ca3f1d8c2\"" Jan 29 11:25:00.322387 containerd[1470]: time="2025-01-29T11:25:00.322209721Z" level=info msg="StartContainer for \"5670d5e9f3f26a35874df273ccfaa56d407ac9b1d5e55b5f187fe48ca3f1d8c2\"" Jan 29 11:25:00.371841 kubelet[1783]: E0129 11:25:00.371339 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:00.418772 systemd[1]: Started cri-containerd-5670d5e9f3f26a35874df273ccfaa56d407ac9b1d5e55b5f187fe48ca3f1d8c2.scope - libcontainer container 5670d5e9f3f26a35874df273ccfaa56d407ac9b1d5e55b5f187fe48ca3f1d8c2. Jan 29 11:25:00.533560 containerd[1470]: time="2025-01-29T11:25:00.533428465Z" level=info msg="StartContainer for \"5670d5e9f3f26a35874df273ccfaa56d407ac9b1d5e55b5f187fe48ca3f1d8c2\" returns successfully" Jan 29 11:25:00.839148 systemd-networkd[1376]: cali60e51b789ff: Link UP Jan 29 11:25:00.841298 systemd-networkd[1376]: cali60e51b789ff: Gained carrier Jan 29 11:25:00.846159 kubelet[1783]: I0129 11:25:00.845701 1783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77d9686979-n7vxn" podStartSLOduration=1.7284717509999998 podStartE2EDuration="3.845680026s" podCreationTimestamp="2025-01-29 11:24:57 +0000 UTC" firstStartedPulling="2025-01-29 11:24:58.075663971 +0000 UTC m=+31.474982375" lastFinishedPulling="2025-01-29 11:25:00.192872248 +0000 UTC m=+33.592190650" observedRunningTime="2025-01-29 11:25:00.845652121 +0000 UTC m=+34.244970544" watchObservedRunningTime="2025-01-29 11:25:00.845680026 +0000 UTC m=+34.244998448" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.514 
[INFO][3513] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {164.92.103.73-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 7ed6788c-feda-44d1-82c5-d9c6fc053a44 1460 0 2025-01-29 11:24:59 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 164.92.103.73 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.103.73-k8s-nfs--server--provisioner--0-" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.514 [INFO][3513] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.581 [INFO][3575] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" HandleID="k8s-pod-network.add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" 
Workload="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.634 [INFO][3575] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" HandleID="k8s-pod-network.add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Workload="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002938a0), Attrs:map[string]string{"namespace":"default", "node":"164.92.103.73", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 11:25:00.581170715 +0000 UTC"}, Hostname:"164.92.103.73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.634 [INFO][3575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.635 [INFO][3575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.635 [INFO][3575] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.103.73' Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.648 [INFO][3575] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" host="164.92.103.73" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.674 [INFO][3575] ipam/ipam.go 372: Looking up existing affinities for host host="164.92.103.73" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.693 [INFO][3575] ipam/ipam.go 489: Trying affinity for 192.168.127.192/26 host="164.92.103.73" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.709 [INFO][3575] ipam/ipam.go 155: Attempting to load block cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.725 [INFO][3575] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.725 [INFO][3575] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.192/26 handle="k8s-pod-network.add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" host="164.92.103.73" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.731 [INFO][3575] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63 Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.764 [INFO][3575] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.127.192/26 handle="k8s-pod-network.add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" host="164.92.103.73" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.831 [INFO][3575] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.127.196/26] block=192.168.127.192/26 
handle="k8s-pod-network.add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" host="164.92.103.73" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.831 [INFO][3575] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.196/26] handle="k8s-pod-network.add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" host="164.92.103.73" Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.831 [INFO][3575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:25:00.888153 containerd[1470]: 2025-01-29 11:25:00.831 [INFO][3575] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.196/26] IPv6=[] ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" HandleID="k8s-pod-network.add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Workload="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:25:00.890035 containerd[1470]: 2025-01-29 11:25:00.832 [INFO][3513] cni-plugin/k8s.go 386: Populated endpoint ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7ed6788c-feda-44d1-82c5-d9c6fc053a44", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.103.73", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.127.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:25:00.890035 containerd[1470]: 2025-01-29 11:25:00.833 [INFO][3513] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.127.196/32] ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:25:00.890035 containerd[1470]: 2025-01-29 11:25:00.833 [INFO][3513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:25:00.890035 containerd[1470]: 2025-01-29 11:25:00.841 [INFO][3513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:25:00.890536 containerd[1470]: 2025-01-29 11:25:00.842 [INFO][3513] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7ed6788c-feda-44d1-82c5-d9c6fc053a44", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.103.73", ContainerID:"add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.127.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"b6:ae:75:64:ab:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:25:00.890536 containerd[1470]: 2025-01-29 11:25:00.883 [INFO][3513] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="164.92.103.73-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:25:00.927581 containerd[1470]: time="2025-01-29T11:25:00.927357206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:25:00.927581 containerd[1470]: time="2025-01-29T11:25:00.927456995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:25:00.927581 containerd[1470]: time="2025-01-29T11:25:00.927474813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:25:00.928008 containerd[1470]: time="2025-01-29T11:25:00.927760022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:25:00.953621 systemd[1]: Started cri-containerd-add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63.scope - libcontainer container add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63.
Jan 29 11:25:01.016727 containerd[1470]: time="2025-01-29T11:25:01.016675509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7ed6788c-feda-44d1-82c5-d9c6fc053a44,Namespace:default,Attempt:0,} returns sandbox id \"add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63\""
Jan 29 11:25:01.019285 containerd[1470]: time="2025-01-29T11:25:01.019199301Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 29 11:25:01.372695 kubelet[1783]: E0129 11:25:01.372563 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:01.676748 kubelet[1783]: I0129 11:25:01.676574 1783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:25:01.677946 kubelet[1783]: E0129 11:25:01.677132 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:25:01.759442 kubelet[1783]: E0129 11:25:01.758984 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:25:02.080668 systemd-networkd[1376]: cali60e51b789ff: Gained IPv6LL
Jan 29 11:25:02.375307 kubelet[1783]: E0129 11:25:02.375243 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:03.375679 kubelet[1783]: E0129 11:25:03.375622 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:04.377006 kubelet[1783]: E0129 11:25:04.376813 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:04.861620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195809434.mount: Deactivated successfully.
Jan 29 11:25:05.377694 kubelet[1783]: E0129 11:25:05.377480 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:06.379560 kubelet[1783]: E0129 11:25:06.379375 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:06.549703 update_engine[1451]: I20250129 11:25:06.549458 1451 update_attempter.cc:509] Updating boot flags...
Jan 29 11:25:06.634770 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3822)
Jan 29 11:25:06.779076 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3825)
Jan 29 11:25:07.345525 kubelet[1783]: E0129 11:25:07.342916 1783 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:07.379867 kubelet[1783]: E0129 11:25:07.379811 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:08.210166 containerd[1470]: time="2025-01-29T11:25:08.210086722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:25:08.212971 containerd[1470]: time="2025-01-29T11:25:08.212013090Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 29 11:25:08.214183 containerd[1470]: time="2025-01-29T11:25:08.214114102Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:25:08.244045 containerd[1470]: time="2025-01-29T11:25:08.243976870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:25:08.248651 containerd[1470]: time="2025-01-29T11:25:08.248550006Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 7.229149175s"
Jan 29 11:25:08.248816 containerd[1470]: time="2025-01-29T11:25:08.248657417Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 29 11:25:08.255289 containerd[1470]: time="2025-01-29T11:25:08.255211193Z" level=info msg="CreateContainer within sandbox \"add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 29 11:25:08.282402 containerd[1470]: time="2025-01-29T11:25:08.282328759Z" level=info msg="CreateContainer within sandbox \"add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb\""
Jan 29 11:25:08.283563 containerd[1470]: time="2025-01-29T11:25:08.283488697Z" level=info msg="StartContainer for \"134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb\""
Jan 29 11:25:08.331560 systemd[1]: Started cri-containerd-134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb.scope - libcontainer container 134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb.
Jan 29 11:25:08.376782 containerd[1470]: time="2025-01-29T11:25:08.376728059Z" level=info msg="StartContainer for \"134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb\" returns successfully"
Jan 29 11:25:08.380350 kubelet[1783]: E0129 11:25:08.380308 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:09.381697 kubelet[1783]: E0129 11:25:09.381582 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:10.381907 kubelet[1783]: E0129 11:25:10.381828 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:11.382430 kubelet[1783]: E0129 11:25:11.382348 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:12.383045 kubelet[1783]: E0129 11:25:12.382947 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:13.383626 kubelet[1783]: E0129 11:25:13.383568 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:14.384402 kubelet[1783]: E0129 11:25:14.384320 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:15.384612 kubelet[1783]: E0129 11:25:15.384545 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:16.385492 kubelet[1783]: E0129 11:25:16.385436 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:17.385975 kubelet[1783]: E0129 11:25:17.385883 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:18.386659 kubelet[1783]: E0129 11:25:18.386583 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:19.387005 kubelet[1783]: E0129 11:25:19.386938 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:20.387677 kubelet[1783]: E0129 11:25:20.387623 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:21.388858 kubelet[1783]: E0129 11:25:21.388799 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:22.389944 kubelet[1783]: E0129 11:25:22.389867 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:23.390928 kubelet[1783]: E0129 11:25:23.390851 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:24.391712 kubelet[1783]: E0129 11:25:24.391645 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:25.392332 kubelet[1783]: E0129 11:25:25.392226 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:26.393460 kubelet[1783]: E0129 11:25:26.393380 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:27.337601 kubelet[1783]: E0129 11:25:27.337537 1783 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:27.396434 kubelet[1783]: E0129 11:25:27.395874 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:27.401998 containerd[1470]: time="2025-01-29T11:25:27.401650654Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\""
Jan 29 11:25:27.401998 containerd[1470]: time="2025-01-29T11:25:27.401794363Z" level=info msg="TearDown network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" successfully"
Jan 29 11:25:27.401998 containerd[1470]: time="2025-01-29T11:25:27.401915040Z" level=info msg="StopPodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" returns successfully"
Jan 29 11:25:27.441638 containerd[1470]: time="2025-01-29T11:25:27.441451132Z" level=info msg="RemovePodSandbox for \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\""
Jan 29 11:25:27.448967 containerd[1470]: time="2025-01-29T11:25:27.448714451Z" level=info msg="Forcibly stopping sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\""
Jan 29 11:25:27.460274 containerd[1470]: time="2025-01-29T11:25:27.448863695Z" level=info msg="TearDown network for sandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" successfully"
Jan 29 11:25:27.479709 containerd[1470]: time="2025-01-29T11:25:27.479642046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:25:27.480363 containerd[1470]: time="2025-01-29T11:25:27.479940564Z" level=info msg="RemovePodSandbox \"962060306f364562504175d5ca88e218ec3d6e9d0953dea8df7f04e5078f9241\" returns successfully" Jan 29 11:25:27.480740 containerd[1470]: time="2025-01-29T11:25:27.480588179Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\"" Jan 29 11:25:27.480740 containerd[1470]: time="2025-01-29T11:25:27.480691174Z" level=info msg="TearDown network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" successfully" Jan 29 11:25:27.480740 containerd[1470]: time="2025-01-29T11:25:27.480703421Z" level=info msg="StopPodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" returns successfully" Jan 29 11:25:27.482252 containerd[1470]: time="2025-01-29T11:25:27.481110906Z" level=info msg="RemovePodSandbox for \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\"" Jan 29 11:25:27.482252 containerd[1470]: time="2025-01-29T11:25:27.481136273Z" level=info msg="Forcibly stopping sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\"" Jan 29 11:25:27.482252 containerd[1470]: time="2025-01-29T11:25:27.481200527Z" level=info msg="TearDown network for sandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" successfully" Jan 29 11:25:27.484816 containerd[1470]: time="2025-01-29T11:25:27.484778567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.485002 containerd[1470]: time="2025-01-29T11:25:27.484986724Z" level=info msg="RemovePodSandbox \"7062181499181cdfc8af432621a2cec3cf9f3b596e5ae99ef45fab9bf0367dfc\" returns successfully" Jan 29 11:25:27.486183 containerd[1470]: time="2025-01-29T11:25:27.486153910Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\"" Jan 29 11:25:27.486829 containerd[1470]: time="2025-01-29T11:25:27.486274802Z" level=info msg="TearDown network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" successfully" Jan 29 11:25:27.486829 containerd[1470]: time="2025-01-29T11:25:27.486288359Z" level=info msg="StopPodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" returns successfully" Jan 29 11:25:27.487397 containerd[1470]: time="2025-01-29T11:25:27.487373932Z" level=info msg="RemovePodSandbox for \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\"" Jan 29 11:25:27.487523 containerd[1470]: time="2025-01-29T11:25:27.487506423Z" level=info msg="Forcibly stopping sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\"" Jan 29 11:25:27.487693 containerd[1470]: time="2025-01-29T11:25:27.487650119Z" level=info msg="TearDown network for sandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" successfully" Jan 29 11:25:27.497248 containerd[1470]: time="2025-01-29T11:25:27.497187989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.497476 containerd[1470]: time="2025-01-29T11:25:27.497455872Z" level=info msg="RemovePodSandbox \"4c142d26b926e06e18182b24a91ce1c4c5d07ab577c714ed2ef9ca882af41a33\" returns successfully" Jan 29 11:25:27.498080 containerd[1470]: time="2025-01-29T11:25:27.498053739Z" level=info msg="StopPodSandbox for \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\"" Jan 29 11:25:27.498363 containerd[1470]: time="2025-01-29T11:25:27.498346486Z" level=info msg="TearDown network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" successfully" Jan 29 11:25:27.498454 containerd[1470]: time="2025-01-29T11:25:27.498443087Z" level=info msg="StopPodSandbox for \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" returns successfully" Jan 29 11:25:27.498765 containerd[1470]: time="2025-01-29T11:25:27.498747091Z" level=info msg="RemovePodSandbox for \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\"" Jan 29 11:25:27.499047 containerd[1470]: time="2025-01-29T11:25:27.498905928Z" level=info msg="Forcibly stopping sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\"" Jan 29 11:25:27.499047 containerd[1470]: time="2025-01-29T11:25:27.498972486Z" level=info msg="TearDown network for sandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" successfully" Jan 29 11:25:27.502091 containerd[1470]: time="2025-01-29T11:25:27.501803433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.502091 containerd[1470]: time="2025-01-29T11:25:27.501982041Z" level=info msg="RemovePodSandbox \"44bb5dce2760c31e226406f9875c1b514052d2deca8818c1ea5081b453750534\" returns successfully" Jan 29 11:25:27.502755 containerd[1470]: time="2025-01-29T11:25:27.502585502Z" level=info msg="StopPodSandbox for \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\"" Jan 29 11:25:27.502755 containerd[1470]: time="2025-01-29T11:25:27.502693095Z" level=info msg="TearDown network for sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\" successfully" Jan 29 11:25:27.502755 containerd[1470]: time="2025-01-29T11:25:27.502702555Z" level=info msg="StopPodSandbox for \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\" returns successfully" Jan 29 11:25:27.503016 containerd[1470]: time="2025-01-29T11:25:27.502990228Z" level=info msg="RemovePodSandbox for \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\"" Jan 29 11:25:27.503049 containerd[1470]: time="2025-01-29T11:25:27.503022377Z" level=info msg="Forcibly stopping sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\"" Jan 29 11:25:27.503137 containerd[1470]: time="2025-01-29T11:25:27.503098132Z" level=info msg="TearDown network for sandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\" successfully" Jan 29 11:25:27.506260 containerd[1470]: time="2025-01-29T11:25:27.506201976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.506365 containerd[1470]: time="2025-01-29T11:25:27.506291419Z" level=info msg="RemovePodSandbox \"446fc504a7a0fbf1ea770f2087671e650cfe4019641a836cc3bdfda4944a15a1\" returns successfully" Jan 29 11:25:27.506804 containerd[1470]: time="2025-01-29T11:25:27.506754923Z" level=info msg="StopPodSandbox for \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\"" Jan 29 11:25:27.507033 containerd[1470]: time="2025-01-29T11:25:27.506871226Z" level=info msg="TearDown network for sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\" successfully" Jan 29 11:25:27.507033 containerd[1470]: time="2025-01-29T11:25:27.506884127Z" level=info msg="StopPodSandbox for \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\" returns successfully" Jan 29 11:25:27.508317 containerd[1470]: time="2025-01-29T11:25:27.507316426Z" level=info msg="RemovePodSandbox for \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\"" Jan 29 11:25:27.508317 containerd[1470]: time="2025-01-29T11:25:27.507341276Z" level=info msg="Forcibly stopping sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\"" Jan 29 11:25:27.508317 containerd[1470]: time="2025-01-29T11:25:27.507429642Z" level=info msg="TearDown network for sandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\" successfully" Jan 29 11:25:27.510464 containerd[1470]: time="2025-01-29T11:25:27.510420991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.510702 containerd[1470]: time="2025-01-29T11:25:27.510681797Z" level=info msg="RemovePodSandbox \"8fe88bf9dac637e8bd88760e94fca2ad4620093dd28444d38421ac99e390bccb\" returns successfully" Jan 29 11:25:27.515389 containerd[1470]: time="2025-01-29T11:25:27.515331832Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\"" Jan 29 11:25:27.515765 containerd[1470]: time="2025-01-29T11:25:27.515741920Z" level=info msg="TearDown network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" successfully" Jan 29 11:25:27.516280 containerd[1470]: time="2025-01-29T11:25:27.516226247Z" level=info msg="StopPodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" returns successfully" Jan 29 11:25:27.516774 containerd[1470]: time="2025-01-29T11:25:27.516744641Z" level=info msg="RemovePodSandbox for \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\"" Jan 29 11:25:27.516774 containerd[1470]: time="2025-01-29T11:25:27.516775411Z" level=info msg="Forcibly stopping sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\"" Jan 29 11:25:27.516901 containerd[1470]: time="2025-01-29T11:25:27.516856407Z" level=info msg="TearDown network for sandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" successfully" Jan 29 11:25:27.519984 containerd[1470]: time="2025-01-29T11:25:27.519931886Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.520158 containerd[1470]: time="2025-01-29T11:25:27.520009292Z" level=info msg="RemovePodSandbox \"e1442704f00a8622ec73b9fa09c3507a22283c309257c94937ce99c1ea6e305e\" returns successfully" Jan 29 11:25:27.520646 containerd[1470]: time="2025-01-29T11:25:27.520613765Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\"" Jan 29 11:25:27.521043 containerd[1470]: time="2025-01-29T11:25:27.520911578Z" level=info msg="TearDown network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" successfully" Jan 29 11:25:27.521043 containerd[1470]: time="2025-01-29T11:25:27.520967599Z" level=info msg="StopPodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" returns successfully" Jan 29 11:25:27.521730 containerd[1470]: time="2025-01-29T11:25:27.521669895Z" level=info msg="RemovePodSandbox for \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\"" Jan 29 11:25:27.521816 containerd[1470]: time="2025-01-29T11:25:27.521744746Z" level=info msg="Forcibly stopping sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\"" Jan 29 11:25:27.522414 containerd[1470]: time="2025-01-29T11:25:27.521992078Z" level=info msg="TearDown network for sandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" successfully" Jan 29 11:25:27.525492 containerd[1470]: time="2025-01-29T11:25:27.525443018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.525655 containerd[1470]: time="2025-01-29T11:25:27.525526837Z" level=info msg="RemovePodSandbox \"eb42c6688aaa3a8c17c02f4c459f29d4da35aaa24eba6777ea695fc1bf4bd02c\" returns successfully" Jan 29 11:25:27.526221 containerd[1470]: time="2025-01-29T11:25:27.526190604Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\"" Jan 29 11:25:27.526633 containerd[1470]: time="2025-01-29T11:25:27.526499374Z" level=info msg="TearDown network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" successfully" Jan 29 11:25:27.526633 containerd[1470]: time="2025-01-29T11:25:27.526529351Z" level=info msg="StopPodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" returns successfully" Jan 29 11:25:27.526958 containerd[1470]: time="2025-01-29T11:25:27.526934317Z" level=info msg="RemovePodSandbox for \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\"" Jan 29 11:25:27.527036 containerd[1470]: time="2025-01-29T11:25:27.527008805Z" level=info msg="Forcibly stopping sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\"" Jan 29 11:25:27.527159 containerd[1470]: time="2025-01-29T11:25:27.527107940Z" level=info msg="TearDown network for sandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" successfully" Jan 29 11:25:27.530772 containerd[1470]: time="2025-01-29T11:25:27.530725515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.530917 containerd[1470]: time="2025-01-29T11:25:27.530805405Z" level=info msg="RemovePodSandbox \"eacde3d7cf8032f9e0178c3260bb01bbe31dd16ff19c73483fb76be06326eead\" returns successfully" Jan 29 11:25:27.531407 containerd[1470]: time="2025-01-29T11:25:27.531377729Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\"" Jan 29 11:25:27.531535 containerd[1470]: time="2025-01-29T11:25:27.531513219Z" level=info msg="TearDown network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" successfully" Jan 29 11:25:27.531601 containerd[1470]: time="2025-01-29T11:25:27.531535428Z" level=info msg="StopPodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" returns successfully" Jan 29 11:25:27.531977 containerd[1470]: time="2025-01-29T11:25:27.531950967Z" level=info msg="RemovePodSandbox for \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\"" Jan 29 11:25:27.532246 containerd[1470]: time="2025-01-29T11:25:27.532098722Z" level=info msg="Forcibly stopping sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\"" Jan 29 11:25:27.532395 containerd[1470]: time="2025-01-29T11:25:27.532188533Z" level=info msg="TearDown network for sandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" successfully" Jan 29 11:25:27.535547 containerd[1470]: time="2025-01-29T11:25:27.535352106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.535547 containerd[1470]: time="2025-01-29T11:25:27.535435845Z" level=info msg="RemovePodSandbox \"bfe6f5215b170e1526c06b78bd177fcd08c1269a1cc7faff8987aa4d2fb38c1f\" returns successfully" Jan 29 11:25:27.538084 containerd[1470]: time="2025-01-29T11:25:27.538040006Z" level=info msg="StopPodSandbox for \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\"" Jan 29 11:25:27.538248 containerd[1470]: time="2025-01-29T11:25:27.538189237Z" level=info msg="TearDown network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" successfully" Jan 29 11:25:27.538248 containerd[1470]: time="2025-01-29T11:25:27.538205786Z" level=info msg="StopPodSandbox for \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" returns successfully" Jan 29 11:25:27.538731 containerd[1470]: time="2025-01-29T11:25:27.538694580Z" level=info msg="RemovePodSandbox for \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\"" Jan 29 11:25:27.538731 containerd[1470]: time="2025-01-29T11:25:27.538730892Z" level=info msg="Forcibly stopping sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\"" Jan 29 11:25:27.538875 containerd[1470]: time="2025-01-29T11:25:27.538818162Z" level=info msg="TearDown network for sandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" successfully" Jan 29 11:25:27.542252 containerd[1470]: time="2025-01-29T11:25:27.542193668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.542431 containerd[1470]: time="2025-01-29T11:25:27.542294973Z" level=info msg="RemovePodSandbox \"409729e49a1985edb8d924d6f9d124d18f96460ee046e1b2fb613efebe38803f\" returns successfully" Jan 29 11:25:27.543197 containerd[1470]: time="2025-01-29T11:25:27.542983569Z" level=info msg="StopPodSandbox for \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\"" Jan 29 11:25:27.543197 containerd[1470]: time="2025-01-29T11:25:27.543105136Z" level=info msg="TearDown network for sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\" successfully" Jan 29 11:25:27.543197 containerd[1470]: time="2025-01-29T11:25:27.543116957Z" level=info msg="StopPodSandbox for \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\" returns successfully" Jan 29 11:25:27.543709 containerd[1470]: time="2025-01-29T11:25:27.543674209Z" level=info msg="RemovePodSandbox for \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\"" Jan 29 11:25:27.543709 containerd[1470]: time="2025-01-29T11:25:27.543708554Z" level=info msg="Forcibly stopping sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\"" Jan 29 11:25:27.543856 containerd[1470]: time="2025-01-29T11:25:27.543800294Z" level=info msg="TearDown network for sandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\" successfully" Jan 29 11:25:27.546875 containerd[1470]: time="2025-01-29T11:25:27.546800476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.546875 containerd[1470]: time="2025-01-29T11:25:27.546875018Z" level=info msg="RemovePodSandbox \"c071d1bffc98e255d621c96b80e2073fc41b7d8bed2d1f7652f100279fb0f518\" returns successfully" Jan 29 11:25:27.547646 containerd[1470]: time="2025-01-29T11:25:27.547609499Z" level=info msg="StopPodSandbox for \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\"" Jan 29 11:25:27.547761 containerd[1470]: time="2025-01-29T11:25:27.547734223Z" level=info msg="TearDown network for sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\" successfully" Jan 29 11:25:27.547822 containerd[1470]: time="2025-01-29T11:25:27.547757563Z" level=info msg="StopPodSandbox for \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\" returns successfully" Jan 29 11:25:27.548353 containerd[1470]: time="2025-01-29T11:25:27.548318261Z" level=info msg="RemovePodSandbox for \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\"" Jan 29 11:25:27.548353 containerd[1470]: time="2025-01-29T11:25:27.548352907Z" level=info msg="Forcibly stopping sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\"" Jan 29 11:25:27.548549 containerd[1470]: time="2025-01-29T11:25:27.548443801Z" level=info msg="TearDown network for sandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\" successfully" Jan 29 11:25:27.551418 containerd[1470]: time="2025-01-29T11:25:27.551376674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:25:27.551617 containerd[1470]: time="2025-01-29T11:25:27.551451067Z" level=info msg="RemovePodSandbox \"b53253f0f750c8f17094c1be2e6932a543af954e96085fae99cbf36aacdc801e\" returns successfully" Jan 29 11:25:28.396572 kubelet[1783]: E0129 11:25:28.396509 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:29.397014 kubelet[1783]: E0129 11:25:29.396915 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:30.397675 kubelet[1783]: E0129 11:25:30.397610 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:31.398690 kubelet[1783]: E0129 11:25:31.398455 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:32.399730 kubelet[1783]: E0129 11:25:32.399649 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:33.400352 kubelet[1783]: E0129 11:25:33.400264 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:33.988206 kubelet[1783]: I0129 11:25:33.988116 1783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=27.754610671000002 podStartE2EDuration="34.988059939s" podCreationTimestamp="2025-01-29 11:24:59 +0000 UTC" firstStartedPulling="2025-01-29 11:25:01.018687541 +0000 UTC m=+34.418005945" lastFinishedPulling="2025-01-29 11:25:08.25213681 +0000 UTC m=+41.651455213" observedRunningTime="2025-01-29 11:25:08.920955213 +0000 UTC m=+42.320273660" watchObservedRunningTime="2025-01-29 11:25:33.988059939 +0000 UTC m=+67.387378362" Jan 29 11:25:34.025297 systemd[1]: 
run-containerd-runc-k8s.io-18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8-runc.wfs4d9.mount: Deactivated successfully. Jan 29 11:25:34.148858 containerd[1470]: time="2025-01-29T11:25:34.148421219Z" level=info msg="StopContainer for \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\" with timeout 5 (s)" Jan 29 11:25:34.154628 containerd[1470]: time="2025-01-29T11:25:34.154062005Z" level=info msg="Stop container \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\" with signal terminated" Jan 29 11:25:34.172604 systemd[1]: cri-containerd-18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8.scope: Deactivated successfully. Jan 29 11:25:34.173524 systemd[1]: cri-containerd-18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8.scope: Consumed 9.896s CPU time. Jan 29 11:25:34.204392 containerd[1470]: time="2025-01-29T11:25:34.203394411Z" level=info msg="shim disconnected" id=18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8 namespace=k8s.io Jan 29 11:25:34.204606 containerd[1470]: time="2025-01-29T11:25:34.204566744Z" level=warning msg="cleaning up after shim disconnected" id=18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8 namespace=k8s.io Jan 29 11:25:34.204727 containerd[1470]: time="2025-01-29T11:25:34.204702600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:25:34.207601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8-rootfs.mount: Deactivated successfully. 
Jan 29 11:25:34.264396 containerd[1470]: time="2025-01-29T11:25:34.264115248Z" level=info msg="StopContainer for \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\" returns successfully" Jan 29 11:25:34.270394 containerd[1470]: time="2025-01-29T11:25:34.267528913Z" level=info msg="StopPodSandbox for \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\"" Jan 29 11:25:34.270394 containerd[1470]: time="2025-01-29T11:25:34.267626668Z" level=info msg="Container to stop \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:25:34.270394 containerd[1470]: time="2025-01-29T11:25:34.267682501Z" level=info msg="Container to stop \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:25:34.270394 containerd[1470]: time="2025-01-29T11:25:34.267697777Z" level=info msg="Container to stop \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:25:34.273282 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a-shm.mount: Deactivated successfully. Jan 29 11:25:34.287827 systemd[1]: cri-containerd-8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a.scope: Deactivated successfully. 
Jan 29 11:25:34.316151 containerd[1470]: time="2025-01-29T11:25:34.315711962Z" level=info msg="shim disconnected" id=8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a namespace=k8s.io Jan 29 11:25:34.316151 containerd[1470]: time="2025-01-29T11:25:34.315776288Z" level=warning msg="cleaning up after shim disconnected" id=8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a namespace=k8s.io Jan 29 11:25:34.316151 containerd[1470]: time="2025-01-29T11:25:34.315784853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:25:34.320891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a-rootfs.mount: Deactivated successfully. Jan 29 11:25:34.344575 containerd[1470]: time="2025-01-29T11:25:34.344349151Z" level=info msg="TearDown network for sandbox \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\" successfully" Jan 29 11:25:34.344575 containerd[1470]: time="2025-01-29T11:25:34.344402274Z" level=info msg="StopPodSandbox for \"8f471fc28b2880ba4ce34ae2335fb5bb2cd7f76757acab8354025ced55285b0a\" returns successfully" Jan 29 11:25:34.401560 kubelet[1783]: E0129 11:25:34.401491 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:34.416179 kubelet[1783]: I0129 11:25:34.416094 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d680fae3-bced-460c-a58b-8ebe48dfde4f-tigera-ca-bundle\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416179 kubelet[1783]: I0129 11:25:34.416164 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbcjx\" (UniqueName: \"kubernetes.io/projected/d680fae3-bced-460c-a58b-8ebe48dfde4f-kube-api-access-sbcjx\") pod 
\"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416179 kubelet[1783]: I0129 11:25:34.416186 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-net-dir\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416440 kubelet[1783]: I0129 11:25:34.416265 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d680fae3-bced-460c-a58b-8ebe48dfde4f-node-certs\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416440 kubelet[1783]: I0129 11:25:34.416289 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-lib-modules\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416440 kubelet[1783]: I0129 11:25:34.416305 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-xtables-lock\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416440 kubelet[1783]: I0129 11:25:34.416318 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-bin-dir\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416440 kubelet[1783]: I0129 11:25:34.416338 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-policysync\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416440 kubelet[1783]: I0129 11:25:34.416362 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-log-dir\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416647 kubelet[1783]: I0129 11:25:34.416389 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-var-lib-calico\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416647 kubelet[1783]: I0129 11:25:34.416405 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-flexvol-driver-host\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.416647 kubelet[1783]: I0129 11:25:34.416422 1783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-var-run-calico\") pod \"d680fae3-bced-460c-a58b-8ebe48dfde4f\" (UID: \"d680fae3-bced-460c-a58b-8ebe48dfde4f\") " Jan 29 11:25:34.417889 kubelet[1783]: I0129 11:25:34.416680 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 11:25:34.420279 kubelet[1783]: I0129 11:25:34.416555 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 11:25:34.420279 kubelet[1783]: I0129 11:25:34.418489 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 11:25:34.420279 kubelet[1783]: I0129 11:25:34.418511 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-policysync" (OuterVolumeSpecName: "policysync") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 11:25:34.420279 kubelet[1783]: I0129 11:25:34.418556 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 11:25:34.420279 kubelet[1783]: I0129 11:25:34.418576 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 11:25:34.420593 kubelet[1783]: I0129 11:25:34.418596 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 11:25:34.420593 kubelet[1783]: I0129 11:25:34.418617 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 11:25:34.422168 kubelet[1783]: I0129 11:25:34.422119 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d680fae3-bced-460c-a58b-8ebe48dfde4f-kube-api-access-sbcjx" (OuterVolumeSpecName: "kube-api-access-sbcjx") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "kube-api-access-sbcjx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 11:25:34.423900 kubelet[1783]: I0129 11:25:34.423853 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d680fae3-bced-460c-a58b-8ebe48dfde4f-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 11:25:34.424122 kubelet[1783]: I0129 11:25:34.424101 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 11:25:34.432290 kubelet[1783]: I0129 11:25:34.432207 1783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d680fae3-bced-460c-a58b-8ebe48dfde4f-node-certs" (OuterVolumeSpecName: "node-certs") pod "d680fae3-bced-460c-a58b-8ebe48dfde4f" (UID: "d680fae3-bced-460c-a58b-8ebe48dfde4f"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 29 11:25:34.464118 kubelet[1783]: I0129 11:25:34.464055 1783 memory_manager.go:355] "RemoveStaleState removing state" podUID="d680fae3-bced-460c-a58b-8ebe48dfde4f" containerName="calico-node" Jan 29 11:25:34.483387 systemd[1]: Created slice kubepods-besteffort-podf193060d_f96c_427a_b919_a684bf6682c7.slice - libcontainer container kubepods-besteffort-podf193060d_f96c_427a_b919_a684bf6682c7.slice. 
Jan 29 11:25:34.517574 kubelet[1783]: I0129 11:25:34.517410 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f193060d-f96c-427a-b919-a684bf6682c7-policysync\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517574 kubelet[1783]: I0129 11:25:34.517461 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f193060d-f96c-427a-b919-a684bf6682c7-cni-net-dir\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517574 kubelet[1783]: I0129 11:25:34.517485 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4gj\" (UniqueName: \"kubernetes.io/projected/f193060d-f96c-427a-b919-a684bf6682c7-kube-api-access-9f4gj\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517574 kubelet[1783]: I0129 11:25:34.517503 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f193060d-f96c-427a-b919-a684bf6682c7-lib-modules\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517574 kubelet[1783]: I0129 11:25:34.517528 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f193060d-f96c-427a-b919-a684bf6682c7-var-run-calico\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517823 kubelet[1783]: I0129 11:25:34.517550 1783 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f193060d-f96c-427a-b919-a684bf6682c7-flexvol-driver-host\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517823 kubelet[1783]: I0129 11:25:34.517565 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f193060d-f96c-427a-b919-a684bf6682c7-xtables-lock\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517823 kubelet[1783]: I0129 11:25:34.517585 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f193060d-f96c-427a-b919-a684bf6682c7-tigera-ca-bundle\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517823 kubelet[1783]: I0129 11:25:34.517610 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f193060d-f96c-427a-b919-a684bf6682c7-cni-bin-dir\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517823 kubelet[1783]: I0129 11:25:34.517628 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f193060d-f96c-427a-b919-a684bf6682c7-var-lib-calico\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517966 kubelet[1783]: I0129 11:25:34.517645 1783 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f193060d-f96c-427a-b919-a684bf6682c7-node-certs\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517966 kubelet[1783]: I0129 11:25:34.517664 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f193060d-f96c-427a-b919-a684bf6682c7-cni-log-dir\") pod \"calico-node-5l5c7\" (UID: \"f193060d-f96c-427a-b919-a684bf6682c7\") " pod="calico-system/calico-node-5l5c7" Jan 29 11:25:34.517966 kubelet[1783]: I0129 11:25:34.517688 1783 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbcjx\" (UniqueName: \"kubernetes.io/projected/d680fae3-bced-460c-a58b-8ebe48dfde4f-kube-api-access-sbcjx\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.517966 kubelet[1783]: I0129 11:25:34.517699 1783 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-net-dir\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.517966 kubelet[1783]: I0129 11:25:34.517710 1783 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d680fae3-bced-460c-a58b-8ebe48dfde4f-node-certs\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.517966 kubelet[1783]: I0129 11:25:34.517720 1783 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-lib-modules\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.517966 kubelet[1783]: I0129 11:25:34.517728 1783 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-xtables-lock\") on node 
\"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.518225 kubelet[1783]: I0129 11:25:34.517738 1783 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-bin-dir\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.518225 kubelet[1783]: I0129 11:25:34.517745 1783 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-policysync\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.518225 kubelet[1783]: I0129 11:25:34.517756 1783 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-cni-log-dir\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.518225 kubelet[1783]: I0129 11:25:34.517765 1783 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-var-run-calico\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.518225 kubelet[1783]: I0129 11:25:34.517774 1783 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-var-lib-calico\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.518225 kubelet[1783]: I0129 11:25:34.517782 1783 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d680fae3-bced-460c-a58b-8ebe48dfde4f-flexvol-driver-host\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.518225 kubelet[1783]: I0129 11:25:34.517802 1783 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d680fae3-bced-460c-a58b-8ebe48dfde4f-tigera-ca-bundle\") on node \"164.92.103.73\" DevicePath \"\"" Jan 29 11:25:34.791751 kubelet[1783]: E0129 
11:25:34.791324 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:25:34.792189 containerd[1470]: time="2025-01-29T11:25:34.791976459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5l5c7,Uid:f193060d-f96c-427a-b919-a684bf6682c7,Namespace:calico-system,Attempt:0,}" Jan 29 11:25:34.827656 containerd[1470]: time="2025-01-29T11:25:34.827358138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:25:34.827656 containerd[1470]: time="2025-01-29T11:25:34.827456691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:25:34.828508 containerd[1470]: time="2025-01-29T11:25:34.828141738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:25:34.829608 containerd[1470]: time="2025-01-29T11:25:34.829407091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:25:34.860635 systemd[1]: Started cri-containerd-26ca4365d8c7975c4e86e8a61f3c36b9bb743a159f2e3dd9c65f3d745256cc07.scope - libcontainer container 26ca4365d8c7975c4e86e8a61f3c36b9bb743a159f2e3dd9c65f3d745256cc07. 
Jan 29 11:25:34.893641 containerd[1470]: time="2025-01-29T11:25:34.893345944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5l5c7,Uid:f193060d-f96c-427a-b919-a684bf6682c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"26ca4365d8c7975c4e86e8a61f3c36b9bb743a159f2e3dd9c65f3d745256cc07\"" Jan 29 11:25:34.895280 kubelet[1783]: E0129 11:25:34.894974 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:25:34.908989 containerd[1470]: time="2025-01-29T11:25:34.908770067Z" level=info msg="CreateContainer within sandbox \"26ca4365d8c7975c4e86e8a61f3c36b9bb743a159f2e3dd9c65f3d745256cc07\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:25:34.944146 kubelet[1783]: I0129 11:25:34.942733 1783 scope.go:117] "RemoveContainer" containerID="18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8" Jan 29 11:25:34.947123 systemd[1]: Removed slice kubepods-besteffort-podd680fae3_bced_460c_a58b_8ebe48dfde4f.slice - libcontainer container kubepods-besteffort-podd680fae3_bced_460c_a58b_8ebe48dfde4f.slice. Jan 29 11:25:34.947337 systemd[1]: kubepods-besteffort-podd680fae3_bced_460c_a58b_8ebe48dfde4f.slice: Consumed 10.693s CPU time. 
Jan 29 11:25:34.948774 containerd[1470]: time="2025-01-29T11:25:34.948647816Z" level=info msg="RemoveContainer for \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\"" Jan 29 11:25:34.950368 containerd[1470]: time="2025-01-29T11:25:34.948693666Z" level=info msg="CreateContainer within sandbox \"26ca4365d8c7975c4e86e8a61f3c36b9bb743a159f2e3dd9c65f3d745256cc07\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8828925de7adcd3a79303ed47b7a33f107dfa1e5e4919d4308bcc60cf0e525f6\"" Jan 29 11:25:34.952179 containerd[1470]: time="2025-01-29T11:25:34.951752527Z" level=info msg="StartContainer for \"8828925de7adcd3a79303ed47b7a33f107dfa1e5e4919d4308bcc60cf0e525f6\"" Jan 29 11:25:34.956994 containerd[1470]: time="2025-01-29T11:25:34.956779715Z" level=info msg="RemoveContainer for \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\" returns successfully" Jan 29 11:25:34.957545 kubelet[1783]: I0129 11:25:34.957359 1783 scope.go:117] "RemoveContainer" containerID="18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2" Jan 29 11:25:34.959548 containerd[1470]: time="2025-01-29T11:25:34.959088036Z" level=info msg="RemoveContainer for \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\"" Jan 29 11:25:34.964848 containerd[1470]: time="2025-01-29T11:25:34.964795141Z" level=info msg="RemoveContainer for \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\" returns successfully" Jan 29 11:25:34.965303 kubelet[1783]: I0129 11:25:34.965273 1783 scope.go:117] "RemoveContainer" containerID="fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93" Jan 29 11:25:34.967128 containerd[1470]: time="2025-01-29T11:25:34.967083691Z" level=info msg="RemoveContainer for \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\"" Jan 29 11:25:34.974263 containerd[1470]: time="2025-01-29T11:25:34.974182007Z" level=info msg="RemoveContainer for 
\"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\" returns successfully" Jan 29 11:25:34.974852 kubelet[1783]: I0129 11:25:34.974591 1783 scope.go:117] "RemoveContainer" containerID="18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8" Jan 29 11:25:34.975354 containerd[1470]: time="2025-01-29T11:25:34.975288457Z" level=error msg="ContainerStatus for \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\": not found" Jan 29 11:25:34.976998 kubelet[1783]: E0129 11:25:34.976645 1783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\": not found" containerID="18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8" Jan 29 11:25:34.986269 kubelet[1783]: I0129 11:25:34.977973 1783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8"} err="failed to get container status \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"18bb2bc186ad330b76ebce924281b8b3a9f2fd188b3aca3b41c3d97da2d2fbf8\": not found" Jan 29 11:25:34.986269 kubelet[1783]: I0129 11:25:34.985624 1783 scope.go:117] "RemoveContainer" containerID="18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2" Jan 29 11:25:34.986854 containerd[1470]: time="2025-01-29T11:25:34.986603599Z" level=error msg="ContainerStatus for \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\": not found" Jan 29 11:25:34.988378 kubelet[1783]: E0129 11:25:34.987909 1783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\": not found" containerID="18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2" Jan 29 11:25:34.988378 kubelet[1783]: I0129 11:25:34.987959 1783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2"} err="failed to get container status \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"18c1698dd3be7b3028a2fe63d7784055e19391970b772dcc3459b015722559d2\": not found" Jan 29 11:25:34.988378 kubelet[1783]: I0129 11:25:34.987991 1783 scope.go:117] "RemoveContainer" containerID="fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93" Jan 29 11:25:34.989145 containerd[1470]: time="2025-01-29T11:25:34.988715014Z" level=error msg="ContainerStatus for \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\": not found" Jan 29 11:25:34.990763 kubelet[1783]: E0129 11:25:34.989361 1783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\": not found" containerID="fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93" Jan 29 11:25:34.990763 kubelet[1783]: I0129 11:25:34.989403 1783 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93"} err="failed to get container status \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa83c7807e167c69743ed9dc7006d1e86c5109ef1e8de18df36eca675199cf93\": not found" Jan 29 11:25:35.009525 systemd[1]: Started cri-containerd-8828925de7adcd3a79303ed47b7a33f107dfa1e5e4919d4308bcc60cf0e525f6.scope - libcontainer container 8828925de7adcd3a79303ed47b7a33f107dfa1e5e4919d4308bcc60cf0e525f6. Jan 29 11:25:35.031559 systemd[1]: var-lib-kubelet-pods-d680fae3\x2dbced\x2d460c\x2da58b\x2d8ebe48dfde4f-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 29 11:25:35.031707 systemd[1]: var-lib-kubelet-pods-d680fae3\x2dbced\x2d460c\x2da58b\x2d8ebe48dfde4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsbcjx.mount: Deactivated successfully. Jan 29 11:25:35.031793 systemd[1]: var-lib-kubelet-pods-d680fae3\x2dbced\x2d460c\x2da58b\x2d8ebe48dfde4f-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 29 11:25:35.065260 containerd[1470]: time="2025-01-29T11:25:35.064115384Z" level=info msg="StartContainer for \"8828925de7adcd3a79303ed47b7a33f107dfa1e5e4919d4308bcc60cf0e525f6\" returns successfully" Jan 29 11:25:35.120743 systemd[1]: cri-containerd-8828925de7adcd3a79303ed47b7a33f107dfa1e5e4919d4308bcc60cf0e525f6.scope: Deactivated successfully. Jan 29 11:25:35.155353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8828925de7adcd3a79303ed47b7a33f107dfa1e5e4919d4308bcc60cf0e525f6-rootfs.mount: Deactivated successfully. 
Jan 29 11:25:35.185524 containerd[1470]: time="2025-01-29T11:25:35.185453587Z" level=info msg="shim disconnected" id=8828925de7adcd3a79303ed47b7a33f107dfa1e5e4919d4308bcc60cf0e525f6 namespace=k8s.io Jan 29 11:25:35.185524 containerd[1470]: time="2025-01-29T11:25:35.185515541Z" level=warning msg="cleaning up after shim disconnected" id=8828925de7adcd3a79303ed47b7a33f107dfa1e5e4919d4308bcc60cf0e525f6 namespace=k8s.io Jan 29 11:25:35.185524 containerd[1470]: time="2025-01-29T11:25:35.185524254Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:25:35.401966 kubelet[1783]: E0129 11:25:35.401899 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:35.476970 kubelet[1783]: I0129 11:25:35.476266 1783 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d680fae3-bced-460c-a58b-8ebe48dfde4f" path="/var/lib/kubelet/pods/d680fae3-bced-460c-a58b-8ebe48dfde4f/volumes" Jan 29 11:25:35.944304 kubelet[1783]: E0129 11:25:35.943423 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:25:35.946300 containerd[1470]: time="2025-01-29T11:25:35.946123948Z" level=info msg="CreateContainer within sandbox \"26ca4365d8c7975c4e86e8a61f3c36b9bb743a159f2e3dd9c65f3d745256cc07\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:25:35.970165 containerd[1470]: time="2025-01-29T11:25:35.970075349Z" level=info msg="CreateContainer within sandbox \"26ca4365d8c7975c4e86e8a61f3c36b9bb743a159f2e3dd9c65f3d745256cc07\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8d8230dcaa8bb3b2e112b99129cf2870c2067d9c2313e83c82d0548570fcadaf\"" Jan 29 11:25:35.971298 containerd[1470]: time="2025-01-29T11:25:35.971066145Z" level=info msg="StartContainer for 
\"8d8230dcaa8bb3b2e112b99129cf2870c2067d9c2313e83c82d0548570fcadaf\"" Jan 29 11:25:36.031011 systemd[1]: Started cri-containerd-8d8230dcaa8bb3b2e112b99129cf2870c2067d9c2313e83c82d0548570fcadaf.scope - libcontainer container 8d8230dcaa8bb3b2e112b99129cf2870c2067d9c2313e83c82d0548570fcadaf. Jan 29 11:25:36.074483 containerd[1470]: time="2025-01-29T11:25:36.074298880Z" level=info msg="StartContainer for \"8d8230dcaa8bb3b2e112b99129cf2870c2067d9c2313e83c82d0548570fcadaf\" returns successfully" Jan 29 11:25:36.403937 kubelet[1783]: E0129 11:25:36.403854 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:36.940788 containerd[1470]: time="2025-01-29T11:25:36.940717003Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jan 29 11:25:36.943889 systemd[1]: cri-containerd-8d8230dcaa8bb3b2e112b99129cf2870c2067d9c2313e83c82d0548570fcadaf.scope: Deactivated successfully. Jan 29 11:25:36.953871 kubelet[1783]: E0129 11:25:36.953806 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:25:36.982964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d8230dcaa8bb3b2e112b99129cf2870c2067d9c2313e83c82d0548570fcadaf-rootfs.mount: Deactivated successfully. 
Jan 29 11:25:36.989712 containerd[1470]: time="2025-01-29T11:25:36.989612961Z" level=info msg="shim disconnected" id=8d8230dcaa8bb3b2e112b99129cf2870c2067d9c2313e83c82d0548570fcadaf namespace=k8s.io Jan 29 11:25:36.989712 containerd[1470]: time="2025-01-29T11:25:36.989688768Z" level=warning msg="cleaning up after shim disconnected" id=8d8230dcaa8bb3b2e112b99129cf2870c2067d9c2313e83c82d0548570fcadaf namespace=k8s.io Jan 29 11:25:36.989712 containerd[1470]: time="2025-01-29T11:25:36.989697676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:25:37.404945 kubelet[1783]: E0129 11:25:37.404847 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:37.958585 kubelet[1783]: E0129 11:25:37.958535 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:25:37.975330 containerd[1470]: time="2025-01-29T11:25:37.974648140Z" level=info msg="CreateContainer within sandbox \"26ca4365d8c7975c4e86e8a61f3c36b9bb743a159f2e3dd9c65f3d745256cc07\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:25:38.003427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050916023.mount: Deactivated successfully. 
Jan 29 11:25:38.006684 containerd[1470]: time="2025-01-29T11:25:38.006621593Z" level=info msg="CreateContainer within sandbox \"26ca4365d8c7975c4e86e8a61f3c36b9bb743a159f2e3dd9c65f3d745256cc07\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"10ea9b7432442cf2542b25b0200a5a71defe74e456d0fc40b1f8ede2fc2bc804\"" Jan 29 11:25:38.007780 containerd[1470]: time="2025-01-29T11:25:38.007739335Z" level=info msg="StartContainer for \"10ea9b7432442cf2542b25b0200a5a71defe74e456d0fc40b1f8ede2fc2bc804\"" Jan 29 11:25:38.059876 systemd[1]: Started cri-containerd-10ea9b7432442cf2542b25b0200a5a71defe74e456d0fc40b1f8ede2fc2bc804.scope - libcontainer container 10ea9b7432442cf2542b25b0200a5a71defe74e456d0fc40b1f8ede2fc2bc804. Jan 29 11:25:38.109532 containerd[1470]: time="2025-01-29T11:25:38.109334972Z" level=info msg="StartContainer for \"10ea9b7432442cf2542b25b0200a5a71defe74e456d0fc40b1f8ede2fc2bc804\" returns successfully" Jan 29 11:25:38.405070 kubelet[1783]: E0129 11:25:38.405009 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:38.968291 kubelet[1783]: E0129 11:25:38.965557 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:25:38.982577 systemd[1]: run-containerd-runc-k8s.io-10ea9b7432442cf2542b25b0200a5a71defe74e456d0fc40b1f8ede2fc2bc804-runc.iXhNgY.mount: Deactivated successfully. 
Jan 29 11:25:39.406008 kubelet[1783]: E0129 11:25:39.405916 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:39.967955 kubelet[1783]: E0129 11:25:39.967889 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:25:40.406162 kubelet[1783]: E0129 11:25:40.406078 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:41.407288 kubelet[1783]: E0129 11:25:41.407214 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:42.408038 kubelet[1783]: E0129 11:25:42.407933 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:43.408854 kubelet[1783]: E0129 11:25:43.408764 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:43.502463 systemd[1]: cri-containerd-134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb.scope: Deactivated successfully. Jan 29 11:25:43.543805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb-rootfs.mount: Deactivated successfully. 
Jan 29 11:25:43.548841 containerd[1470]: time="2025-01-29T11:25:43.548717196Z" level=info msg="shim disconnected" id=134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb namespace=k8s.io Jan 29 11:25:43.548841 containerd[1470]: time="2025-01-29T11:25:43.548842790Z" level=warning msg="cleaning up after shim disconnected" id=134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb namespace=k8s.io Jan 29 11:25:43.549748 containerd[1470]: time="2025-01-29T11:25:43.548862293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:25:43.983183 kubelet[1783]: I0129 11:25:43.983120 1783 scope.go:117] "RemoveContainer" containerID="134b9816dd47734d625595ee87676e0ff3ac0e4fd68ff8fa6849b622eb0ec6eb" Jan 29 11:25:43.988830 containerd[1470]: time="2025-01-29T11:25:43.988780760Z" level=info msg="CreateContainer within sandbox \"add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:1,}" Jan 29 11:25:44.018293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount322749303.mount: Deactivated successfully. 
Jan 29 11:25:44.020269 kubelet[1783]: I0129 11:25:44.019158 1783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5l5c7" podStartSLOduration=10.019134889 podStartE2EDuration="10.019134889s" podCreationTimestamp="2025-01-29 11:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:25:38.996703642 +0000 UTC m=+72.396022064" watchObservedRunningTime="2025-01-29 11:25:44.019134889 +0000 UTC m=+77.418453291" Jan 29 11:25:44.021636 containerd[1470]: time="2025-01-29T11:25:44.021591994Z" level=info msg="CreateContainer within sandbox \"add4cc22ffa749e86199dbf622064e4444b59ea617e4e891cabb40c0acfb8d63\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:1,} returns container id \"77d8ccd39252b1d1778f0f83c1f8a220d3603451d6aed16a9912ff8b4f1ac82e\"" Jan 29 11:25:44.024364 containerd[1470]: time="2025-01-29T11:25:44.023084376Z" level=info msg="StartContainer for \"77d8ccd39252b1d1778f0f83c1f8a220d3603451d6aed16a9912ff8b4f1ac82e\"" Jan 29 11:25:44.079140 systemd[1]: Started cri-containerd-77d8ccd39252b1d1778f0f83c1f8a220d3603451d6aed16a9912ff8b4f1ac82e.scope - libcontainer container 77d8ccd39252b1d1778f0f83c1f8a220d3603451d6aed16a9912ff8b4f1ac82e. 
Jan 29 11:25:44.118222 containerd[1470]: time="2025-01-29T11:25:44.118168617Z" level=info msg="StartContainer for \"77d8ccd39252b1d1778f0f83c1f8a220d3603451d6aed16a9912ff8b4f1ac82e\" returns successfully" Jan 29 11:25:44.410020 kubelet[1783]: E0129 11:25:44.409942 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:45.410489 kubelet[1783]: E0129 11:25:45.410414 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:46.410981 kubelet[1783]: E0129 11:25:46.410897 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:47.337888 kubelet[1783]: E0129 11:25:47.337823 1783 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:47.411318 kubelet[1783]: E0129 11:25:47.411204 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:48.412409 kubelet[1783]: E0129 11:25:48.412090 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:49.412807 kubelet[1783]: E0129 11:25:49.412721 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:50.417745 kubelet[1783]: E0129 11:25:50.417604 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:51.417974 kubelet[1783]: E0129 11:25:51.417892 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:51.449161 systemd[1]: run-containerd-runc-k8s.io-5670d5e9f3f26a35874df273ccfaa56d407ac9b1d5e55b5f187fe48ca3f1d8c2-runc.Gs1Oz4.mount: Deactivated 
successfully. Jan 29 11:25:52.260914 systemd[1]: Created slice kubepods-besteffort-pod312277db_15dc_4af1_b5b3_8b213e394a50.slice - libcontainer container kubepods-besteffort-pod312277db_15dc_4af1_b5b3_8b213e394a50.slice. Jan 29 11:25:52.369049 kubelet[1783]: I0129 11:25:52.368738 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eea5d50b-a130-4126-a11b-432293040cbe\" (UniqueName: \"kubernetes.io/nfs/312277db-15dc-4af1-b5b3-8b213e394a50-pvc-eea5d50b-a130-4126-a11b-432293040cbe\") pod \"test-pod-1\" (UID: \"312277db-15dc-4af1-b5b3-8b213e394a50\") " pod="default/test-pod-1" Jan 29 11:25:52.369049 kubelet[1783]: I0129 11:25:52.368787 1783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7d82\" (UniqueName: \"kubernetes.io/projected/312277db-15dc-4af1-b5b3-8b213e394a50-kube-api-access-t7d82\") pod \"test-pod-1\" (UID: \"312277db-15dc-4af1-b5b3-8b213e394a50\") " pod="default/test-pod-1" Jan 29 11:25:52.418659 kubelet[1783]: E0129 11:25:52.418590 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:52.524101 kernel: FS-Cache: Loaded Jan 29 11:25:52.597328 kernel: RPC: Registered named UNIX socket transport module. Jan 29 11:25:52.597462 kernel: RPC: Registered udp transport module. Jan 29 11:25:52.597496 kernel: RPC: Registered tcp transport module. Jan 29 11:25:52.597529 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 11:25:52.597548 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 29 11:25:52.866602 kernel: NFS: Registering the id_resolver key type Jan 29 11:25:52.866760 kernel: Key type id_resolver registered Jan 29 11:25:52.866794 kernel: Key type id_legacy registered Jan 29 11:25:52.913688 nfsidmap[5097]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-fee62db618' Jan 29 11:25:52.918326 nfsidmap[5098]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-fee62db618' Jan 29 11:25:53.165598 containerd[1470]: time="2025-01-29T11:25:53.165100574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:312277db-15dc-4af1-b5b3-8b213e394a50,Namespace:default,Attempt:0,}" Jan 29 11:25:53.349168 systemd-networkd[1376]: cali5ec59c6bf6e: Link UP Jan 29 11:25:53.350104 systemd-networkd[1376]: cali5ec59c6bf6e: Gained carrier Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.234 [INFO][5099] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {164.92.103.73-k8s-test--pod--1-eth0 default 312277db-15dc-4af1-b5b3-8b213e394a50 1689 0 2025-01-29 11:25:01 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 164.92.103.73 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.103.73-k8s-test--pod--1-" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.234 [INFO][5099] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.103.73-k8s-test--pod--1-eth0" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.272 [INFO][5110] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" HandleID="k8s-pod-network.5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Workload="164.92.103.73-k8s-test--pod--1-eth0" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.287 [INFO][5110] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" HandleID="k8s-pod-network.5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Workload="164.92.103.73-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319930), Attrs:map[string]string{"namespace":"default", "node":"164.92.103.73", "pod":"test-pod-1", "timestamp":"2025-01-29 11:25:53.272633567 +0000 UTC"}, Hostname:"164.92.103.73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.287 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.287 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.287 [INFO][5110] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '164.92.103.73' Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.291 [INFO][5110] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" host="164.92.103.73" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.299 [INFO][5110] ipam/ipam.go 372: Looking up existing affinities for host host="164.92.103.73" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.307 [INFO][5110] ipam/ipam.go 489: Trying affinity for 192.168.127.192/26 host="164.92.103.73" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.311 [INFO][5110] ipam/ipam.go 155: Attempting to load block cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.316 [INFO][5110] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.127.192/26 host="164.92.103.73" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.316 [INFO][5110] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.192/26 handle="k8s-pod-network.5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" host="164.92.103.73" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.319 [INFO][5110] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561 Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.327 [INFO][5110] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.127.192/26 handle="k8s-pod-network.5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" host="164.92.103.73" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.342 [INFO][5110] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.127.197/26] block=192.168.127.192/26 
handle="k8s-pod-network.5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" host="164.92.103.73" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.342 [INFO][5110] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.197/26] handle="k8s-pod-network.5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" host="164.92.103.73" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.342 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.343 [INFO][5110] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.197/26] IPv6=[] ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" HandleID="k8s-pod-network.5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Workload="164.92.103.73-k8s-test--pod--1-eth0" Jan 29 11:25:53.371607 containerd[1470]: 2025-01-29 11:25:53.345 [INFO][5099] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.103.73-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"312277db-15dc-4af1-b5b3-8b213e394a50", ResourceVersion:"1689", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"164.92.103.73", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:25:53.372826 containerd[1470]: 2025-01-29 11:25:53.345 [INFO][5099] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.127.197/32] ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.103.73-k8s-test--pod--1-eth0" Jan 29 11:25:53.372826 containerd[1470]: 2025-01-29 11:25:53.345 [INFO][5099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.103.73-k8s-test--pod--1-eth0" Jan 29 11:25:53.372826 containerd[1470]: 2025-01-29 11:25:53.350 [INFO][5099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.103.73-k8s-test--pod--1-eth0" Jan 29 11:25:53.372826 containerd[1470]: 2025-01-29 11:25:53.350 [INFO][5099] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.103.73-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"164.92.103.73-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"312277db-15dc-4af1-b5b3-8b213e394a50", ResourceVersion:"1689", 
Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"164.92.103.73", ContainerID:"5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"56:d0:ec:5c:40:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:25:53.372826 containerd[1470]: 2025-01-29 11:25:53.365 [INFO][5099] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="164.92.103.73-k8s-test--pod--1-eth0" Jan 29 11:25:53.405995 containerd[1470]: time="2025-01-29T11:25:53.405834113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:25:53.406729 containerd[1470]: time="2025-01-29T11:25:53.406449019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:25:53.406729 containerd[1470]: time="2025-01-29T11:25:53.406500000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:25:53.407635 containerd[1470]: time="2025-01-29T11:25:53.407592469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:25:53.419949 kubelet[1783]: E0129 11:25:53.419802 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:53.435401 systemd[1]: Started cri-containerd-5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561.scope - libcontainer container 5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561. Jan 29 11:25:53.494754 containerd[1470]: time="2025-01-29T11:25:53.494699367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:312277db-15dc-4af1-b5b3-8b213e394a50,Namespace:default,Attempt:0,} returns sandbox id \"5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561\"" Jan 29 11:25:53.499133 containerd[1470]: time="2025-01-29T11:25:53.499086945Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:25:53.989264 containerd[1470]: time="2025-01-29T11:25:53.988521116Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 11:25:53.991416 containerd[1470]: time="2025-01-29T11:25:53.991317417Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 492.195079ms" Jan 29 11:25:53.991665 containerd[1470]: time="2025-01-29T11:25:53.991637384Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:25:53.993459 containerd[1470]: 
time="2025-01-29T11:25:53.993402828Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:54.004576 containerd[1470]: time="2025-01-29T11:25:54.004513397Z" level=info msg="CreateContainer within sandbox \"5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 11:25:54.039269 containerd[1470]: time="2025-01-29T11:25:54.039195671Z" level=info msg="CreateContainer within sandbox \"5c05ce15ddf599bff5a9d32e9bac33bda62bb3ae37818ff4e7caaf9d3aebb561\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e046b5648e3eed66b3e5a79a605003bc11dbbc97926259785c224043299292d8\"" Jan 29 11:25:54.039900 containerd[1470]: time="2025-01-29T11:25:54.039839033Z" level=info msg="StartContainer for \"e046b5648e3eed66b3e5a79a605003bc11dbbc97926259785c224043299292d8\"" Jan 29 11:25:54.086554 systemd[1]: Started cri-containerd-e046b5648e3eed66b3e5a79a605003bc11dbbc97926259785c224043299292d8.scope - libcontainer container e046b5648e3eed66b3e5a79a605003bc11dbbc97926259785c224043299292d8. Jan 29 11:25:54.123575 containerd[1470]: time="2025-01-29T11:25:54.123526548Z" level=info msg="StartContainer for \"e046b5648e3eed66b3e5a79a605003bc11dbbc97926259785c224043299292d8\" returns successfully" Jan 29 11:25:54.420905 kubelet[1783]: E0129 11:25:54.420836 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:54.488923 systemd[1]: run-containerd-runc-k8s.io-e046b5648e3eed66b3e5a79a605003bc11dbbc97926259785c224043299292d8-runc.kkYhzI.mount: Deactivated successfully. 
Jan 29 11:25:54.817345 systemd-networkd[1376]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 11:25:55.032038 kubelet[1783]: I0129 11:25:55.031742 1783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=53.5333811 podStartE2EDuration="54.031710626s" podCreationTimestamp="2025-01-29 11:25:01 +0000 UTC" firstStartedPulling="2025-01-29 11:25:53.498660415 +0000 UTC m=+86.897978827" lastFinishedPulling="2025-01-29 11:25:53.99698994 +0000 UTC m=+87.396308353" observedRunningTime="2025-01-29 11:25:55.030383456 +0000 UTC m=+88.429701877" watchObservedRunningTime="2025-01-29 11:25:55.031710626 +0000 UTC m=+88.431029048" Jan 29 11:25:55.421929 kubelet[1783]: E0129 11:25:55.421840 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:56.422141 kubelet[1783]: E0129 11:25:56.422054 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:57.423093 kubelet[1783]: E0129 11:25:57.423028 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:58.424138 kubelet[1783]: E0129 11:25:58.424072 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:59.425269 kubelet[1783]: E0129 11:25:59.425190 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:59.471277 kubelet[1783]: E0129 11:25:59.470351 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"