Jan 29 16:20:46.190211 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:20:46.190263 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:20:46.190291 kernel: BIOS-provided physical RAM map:
Jan 29 16:20:46.190312 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:20:46.190332 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:20:46.190349 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:20:46.190369 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jan 29 16:20:46.190386 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jan 29 16:20:46.190402 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:20:46.190418 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:20:46.190440 kernel: NX (Execute Disable) protection: active
Jan 29 16:20:46.190456 kernel: APIC: Static calls initialized
Jan 29 16:20:46.190473 kernel: SMBIOS 2.8 present.
Jan 29 16:20:46.190492 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 29 16:20:46.190508 kernel: Hypervisor detected: KVM
Jan 29 16:20:46.190524 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:20:46.190550 kernel: kvm-clock: using sched offset of 4639710773 cycles
Jan 29 16:20:46.190569 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:20:46.190588 kernel: tsc: Detected 2294.608 MHz processor
Jan 29 16:20:46.190608 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:20:46.190637 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:20:46.190654 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jan 29 16:20:46.190668 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:20:46.190684 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:20:46.190706 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:20:46.190728 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jan 29 16:20:46.190747 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:46.190766 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:46.190785 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:46.190804 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 29 16:20:46.190846 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:46.190865 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:46.190884 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:46.190908 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:46.190927 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 29 16:20:46.190945 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 29 16:20:46.190964 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 29 16:20:46.190983 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 29 16:20:46.191002 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 29 16:20:46.191021 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 29 16:20:46.191048 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 29 16:20:46.191071 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 16:20:46.191091 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 16:20:46.191111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 16:20:46.191131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 16:20:46.191153 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jan 29 16:20:46.191183 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jan 29 16:20:46.191204 kernel: Zone ranges:
Jan 29 16:20:46.191219 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:20:46.191231 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jan 29 16:20:46.191243 kernel: Normal empty
Jan 29 16:20:46.191256 kernel: Movable zone start for each node
Jan 29 16:20:46.191279 kernel: Early memory node ranges
Jan 29 16:20:46.191300 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:20:46.191320 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jan 29 16:20:46.191340 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jan 29 16:20:46.191364 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:20:46.191384 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:20:46.191405 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jan 29 16:20:46.191425 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:20:46.191446 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:20:46.191465 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:20:46.191485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:20:46.191506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:20:46.191526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:20:46.191550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:20:46.191570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:20:46.191590 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:20:46.191609 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:20:46.191629 kernel: TSC deadline timer available
Jan 29 16:20:46.191650 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 16:20:46.191669 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:20:46.191689 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 29 16:20:46.191709 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:20:46.191733 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:20:46.191753 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 16:20:46.191773 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 16:20:46.191793 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 16:20:46.191812 kernel: pcpu-alloc: [0] 0 1
Jan 29 16:20:46.192585 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 16:20:46.192609 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:20:46.192630 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:20:46.192658 kernel: random: crng init done
Jan 29 16:20:46.192678 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:20:46.192698 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:20:46.192718 kernel: Fallback order for Node 0: 0
Jan 29 16:20:46.192738 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jan 29 16:20:46.192761 kernel: Policy zone: DMA32
Jan 29 16:20:46.192785 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:20:46.192801 kernel: Memory: 1969144K/2096600K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 127196K reserved, 0K cma-reserved)
Jan 29 16:20:46.193371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:20:46.193406 kernel: Kernel/User page tables isolation: enabled
Jan 29 16:20:46.193422 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:20:46.193436 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:20:46.193451 kernel: Dynamic Preempt: voluntary
Jan 29 16:20:46.193466 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:20:46.193491 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:20:46.193508 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:20:46.193522 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:20:46.193535 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:20:46.193555 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:20:46.193569 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:20:46.193583 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:20:46.193598 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 16:20:46.193612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:20:46.193630 kernel: Console: colour VGA+ 80x25
Jan 29 16:20:46.193645 kernel: printk: console [tty0] enabled
Jan 29 16:20:46.193668 kernel: printk: console [ttyS0] enabled
Jan 29 16:20:46.193695 kernel: ACPI: Core revision 20230628
Jan 29 16:20:46.193718 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:20:46.193743 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:20:46.193764 kernel: x2apic enabled
Jan 29 16:20:46.193784 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:20:46.193804 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:20:46.193844 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 29 16:20:46.193865 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Jan 29 16:20:46.193885 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 16:20:46.193907 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 16:20:46.193947 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:20:46.193968 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:20:46.193990 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:20:46.194014 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:20:46.194030 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 16:20:46.194046 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:20:46.194065 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:20:46.194079 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 16:20:46.194105 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 16:20:46.194140 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:20:46.194169 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:20:46.194191 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:20:46.194212 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:20:46.194234 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 16:20:46.194256 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:20:46.194278 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:20:46.194299 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:20:46.194326 kernel: landlock: Up and running.
Jan 29 16:20:46.194352 kernel: SELinux: Initializing.
Jan 29 16:20:46.194372 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:20:46.194387 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:20:46.194404 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 29 16:20:46.194419 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:20:46.194436 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:20:46.194453 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:20:46.194476 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 29 16:20:46.194491 kernel: signal: max sigframe size: 1776
Jan 29 16:20:46.194506 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:20:46.194522 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:20:46.194540 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 16:20:46.194564 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:20:46.194585 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:20:46.194607 kernel: .... node #0, CPUs: #1
Jan 29 16:20:46.194628 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:20:46.194651 kernel: smpboot: Max logical packages: 1
Jan 29 16:20:46.194674 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Jan 29 16:20:46.194689 kernel: devtmpfs: initialized
Jan 29 16:20:46.194708 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:20:46.194723 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:20:46.194742 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:20:46.194759 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:20:46.194776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:20:46.194792 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:20:46.194808 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:20:46.194991 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:20:46.195010 kernel: audit: type=2000 audit(1738167643.587:1): state=initialized audit_enabled=0 res=1
Jan 29 16:20:46.195032 kernel: cpuidle: using governor menu
Jan 29 16:20:46.195060 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:20:46.195082 kernel: dca service started, version 1.12.1
Jan 29 16:20:46.195104 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:20:46.195126 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:20:46.195147 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:20:46.195170 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:20:46.195201 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:20:46.195224 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:20:46.195242 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:20:46.195258 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:20:46.195272 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:20:46.195290 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:20:46.195306 kernel: ACPI: Interpreter enabled
Jan 29 16:20:46.195321 kernel: ACPI: PM: (supports S0 S5)
Jan 29 16:20:46.195340 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:20:46.195368 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:20:46.195391 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:20:46.195412 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 16:20:46.195434 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:20:46.195790 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:20:46.195998 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 16:20:46.196179 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 16:20:46.196219 kernel: acpiphp: Slot [3] registered
Jan 29 16:20:46.196242 kernel: acpiphp: Slot [4] registered
Jan 29 16:20:46.196261 kernel: acpiphp: Slot [5] registered
Jan 29 16:20:46.196284 kernel: acpiphp: Slot [6] registered
Jan 29 16:20:46.196309 kernel: acpiphp: Slot [7] registered
Jan 29 16:20:46.196336 kernel: acpiphp: Slot [8] registered
Jan 29 16:20:46.196356 kernel: acpiphp: Slot [9] registered
Jan 29 16:20:46.196382 kernel: acpiphp: Slot [10] registered
Jan 29 16:20:46.196411 kernel: acpiphp: Slot [11] registered
Jan 29 16:20:46.196437 kernel: acpiphp: Slot [12] registered
Jan 29 16:20:46.196459 kernel: acpiphp: Slot [13] registered
Jan 29 16:20:46.196481 kernel: acpiphp: Slot [14] registered
Jan 29 16:20:46.196502 kernel: acpiphp: Slot [15] registered
Jan 29 16:20:46.196524 kernel: acpiphp: Slot [16] registered
Jan 29 16:20:46.196542 kernel: acpiphp: Slot [17] registered
Jan 29 16:20:46.196558 kernel: acpiphp: Slot [18] registered
Jan 29 16:20:46.196574 kernel: acpiphp: Slot [19] registered
Jan 29 16:20:46.196589 kernel: acpiphp: Slot [20] registered
Jan 29 16:20:46.196604 kernel: acpiphp: Slot [21] registered
Jan 29 16:20:46.196626 kernel: acpiphp: Slot [22] registered
Jan 29 16:20:46.196641 kernel: acpiphp: Slot [23] registered
Jan 29 16:20:46.196655 kernel: acpiphp: Slot [24] registered
Jan 29 16:20:46.196670 kernel: acpiphp: Slot [25] registered
Jan 29 16:20:46.196699 kernel: acpiphp: Slot [26] registered
Jan 29 16:20:46.196728 kernel: acpiphp: Slot [27] registered
Jan 29 16:20:46.196756 kernel: acpiphp: Slot [28] registered
Jan 29 16:20:46.196780 kernel: acpiphp: Slot [29] registered
Jan 29 16:20:46.196795 kernel: acpiphp: Slot [30] registered
Jan 29 16:20:46.196857 kernel: acpiphp: Slot [31] registered
Jan 29 16:20:46.196884 kernel: PCI host bridge to bus 0000:00
Jan 29 16:20:46.197145 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:20:46.197330 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:20:46.197506 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:20:46.197683 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 29 16:20:46.198052 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 29 16:20:46.198237 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:20:46.198434 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 16:20:46.198637 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 16:20:46.198845 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 29 16:20:46.200397 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 29 16:20:46.200589 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 29 16:20:46.200806 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 29 16:20:46.202985 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 29 16:20:46.203207 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 29 16:20:46.203445 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 29 16:20:46.203634 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 29 16:20:46.203994 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 29 16:20:46.204221 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 16:20:46.204428 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 16:20:46.204645 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 29 16:20:46.204839 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 29 16:20:46.205058 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 29 16:20:46.205241 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 29 16:20:46.205401 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 29 16:20:46.205559 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:20:46.205743 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:20:46.208078 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 29 16:20:46.208312 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 29 16:20:46.208524 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 29 16:20:46.208767 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:20:46.209027 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 29 16:20:46.209217 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 29 16:20:46.209406 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 29 16:20:46.209632 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 29 16:20:46.209804 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 29 16:20:46.210108 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 29 16:20:46.210290 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 29 16:20:46.210494 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:20:46.210710 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 16:20:46.210913 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 29 16:20:46.211080 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 29 16:20:46.211252 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:20:46.211421 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 29 16:20:46.211579 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 29 16:20:46.211735 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 29 16:20:46.211914 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 29 16:20:46.212086 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 29 16:20:46.212244 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 29 16:20:46.212277 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:20:46.212299 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:20:46.212321 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:20:46.212343 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:20:46.212368 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 16:20:46.212390 kernel: iommu: Default domain type: Translated
Jan 29 16:20:46.212412 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:20:46.212434 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:20:46.212455 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:20:46.212476 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:20:46.212498 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jan 29 16:20:46.212660 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 16:20:46.212963 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 16:20:46.215128 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:20:46.215176 kernel: vgaarb: loaded
Jan 29 16:20:46.215200 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:20:46.215222 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:20:46.215244 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:20:46.215272 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:20:46.215297 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:20:46.215319 kernel: pnp: PnP ACPI init
Jan 29 16:20:46.215341 kernel: pnp: PnP ACPI: found 4 devices
Jan 29 16:20:46.215377 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:20:46.215399 kernel: NET: Registered PF_INET protocol family
Jan 29 16:20:46.215421 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:20:46.215442 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 16:20:46.215464 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:20:46.215486 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:20:46.215514 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 16:20:46.215537 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 16:20:46.215562 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:20:46.215591 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:20:46.215616 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:20:46.215637 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:20:46.218087 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:20:46.218336 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:20:46.218524 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:20:46.218667 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 29 16:20:46.218803 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 29 16:20:46.219028 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 16:20:46.219209 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 16:20:46.219244 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 16:20:46.219408 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 54111 usecs
Jan 29 16:20:46.219436 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:20:46.219458 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 16:20:46.219480 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 29 16:20:46.219501 kernel: Initialise system trusted keyrings
Jan 29 16:20:46.219531 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 16:20:46.219553 kernel: Key type asymmetric registered
Jan 29 16:20:46.219584 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:20:46.219603 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:20:46.219621 kernel: io scheduler mq-deadline registered
Jan 29 16:20:46.219648 kernel: io scheduler kyber registered
Jan 29 16:20:46.219670 kernel: io scheduler bfq registered
Jan 29 16:20:46.219694 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:20:46.219716 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 16:20:46.219745 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 16:20:46.219767 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 16:20:46.219791 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:20:46.219813 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:20:46.220992 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:20:46.221022 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:20:46.221049 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:20:46.221363 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 29 16:20:46.221403 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:20:46.221593 kernel: rtc_cmos 00:03: registered as rtc0
Jan 29 16:20:46.221784 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T16:20:45 UTC (1738167645)
Jan 29 16:20:46.223081 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 29 16:20:46.223132 kernel: intel_pstate: CPU model not supported
Jan 29 16:20:46.223160 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:20:46.223189 kernel: Segment Routing with IPv6
Jan 29 16:20:46.223217 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:20:46.223244 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:20:46.223283 kernel: Key type dns_resolver registered
Jan 29 16:20:46.223310 kernel: IPI shorthand broadcast: enabled
Jan 29 16:20:46.223337 kernel: sched_clock: Marking stable (1476007937, 235160980)->(1791869883, -80700966)
Jan 29 16:20:46.223364 kernel: registered taskstats version 1
Jan 29 16:20:46.223391 kernel: Loading compiled-in X.509 certificates
Jan 29 16:20:46.223418 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:20:46.223445 kernel: Key type .fscrypt registered
Jan 29 16:20:46.223472 kernel: Key type fscrypt-provisioning registered
Jan 29 16:20:46.223499 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:20:46.223530 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:20:46.223557 kernel: ima: No architecture policies found
Jan 29 16:20:46.223579 kernel: clk: Disabling unused clocks
Jan 29 16:20:46.223602 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:20:46.223628 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:20:46.223696 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:20:46.223716 kernel: Run /init as init process
Jan 29 16:20:46.223731 kernel: with arguments:
Jan 29 16:20:46.223746 kernel: /init
Jan 29 16:20:46.223768 kernel: with environment:
Jan 29 16:20:46.223789 kernel: HOME=/
Jan 29 16:20:46.223811 kernel: TERM=linux
Jan 29 16:20:46.225938 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:20:46.225974 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:20:46.226010 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:20:46.226028 systemd[1]: Detected virtualization kvm.
Jan 29 16:20:46.226055 systemd[1]: Detected architecture x86-64.
Jan 29 16:20:46.226071 systemd[1]: Running in initrd.
Jan 29 16:20:46.226095 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:20:46.226110 systemd[1]: Hostname set to .
Jan 29 16:20:46.226126 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:20:46.226149 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:20:46.226176 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:20:46.226206 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:20:46.226235 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:20:46.226272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:20:46.226297 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:20:46.226327 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:20:46.226355 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:20:46.226375 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:20:46.226395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:20:46.226420 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:20:46.226435 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:20:46.226450 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:20:46.226469 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:20:46.226484 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:20:46.226499 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:20:46.226520 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:20:46.226535 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:20:46.226551 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:20:46.226567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:20:46.226583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:20:46.226607 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:20:46.226629 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:20:46.226647 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:20:46.226673 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:20:46.226688 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:20:46.226705 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:20:46.226720 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:20:46.226736 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:20:46.226752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:20:46.226892 systemd-journald[183]: Collecting audit messages is disabled.
Jan 29 16:20:46.226944 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:20:46.226967 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:20:46.226984 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:20:46.227006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:20:46.227025 systemd-journald[183]: Journal started
Jan 29 16:20:46.227070 systemd-journald[183]: Runtime Journal (/run/log/journal/d0d1b5648dff4a9eb6b2229eafd82e4d) is 4.9M, max 39.3M, 34.4M free.
Jan 29 16:20:46.225549 systemd-modules-load[184]: Inserted module 'overlay'
Jan 29 16:20:46.293369 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:20:46.293427 kernel: Bridge firewalling registered
Jan 29 16:20:46.293451 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:20:46.266719 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 29 16:20:46.302246 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:20:46.311690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:20:46.313541 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:20:46.324291 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:20:46.328220 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:20:46.338308 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:20:46.344083 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:20:46.360897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:20:46.380983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:20:46.387872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:20:46.399123 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:20:46.400395 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:20:46.408315 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:20:46.432015 dracut-cmdline[220]: dracut-dracut-053
Jan 29 16:20:46.438467 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:20:46.481015 systemd-resolved[219]: Positive Trust Anchors:
Jan 29 16:20:46.481037 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:20:46.481117 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:20:46.487805 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jan 29 16:20:46.490041 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:20:46.492475 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:20:46.613926 kernel: SCSI subsystem initialized
Jan 29 16:20:46.631867 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:20:46.649878 kernel: iscsi: registered transport (tcp)
Jan 29 16:20:46.678273 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:20:46.678375 kernel: QLogic iSCSI HBA Driver
Jan 29 16:20:46.755872 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:20:46.762171 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:20:46.814507 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:20:46.814611 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:20:46.814658 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:20:46.873981 kernel: raid6: avx2x4 gen() 15084 MB/s
Jan 29 16:20:46.890902 kernel: raid6: avx2x2 gen() 10668 MB/s
Jan 29 16:20:46.909833 kernel: raid6: avx2x1 gen() 10141 MB/s
Jan 29 16:20:46.909929 kernel: raid6: using algorithm avx2x4 gen() 15084 MB/s
Jan 29 16:20:46.928644 kernel: raid6: .... xor() 8151 MB/s, rmw enabled
Jan 29 16:20:46.928735 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:20:46.957130 kernel: xor: automatically using best checksumming function avx
Jan 29 16:20:47.167876 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:20:47.187738 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:20:47.194370 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:20:47.245027 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 29 16:20:47.258724 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:20:47.269738 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:20:47.314365 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Jan 29 16:20:47.394120 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:20:47.402263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:20:47.518536 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:20:47.528189 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:20:47.565411 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:20:47.573262 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:20:47.575858 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:20:47.576722 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:20:47.584116 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:20:47.627877 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:20:47.717851 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 29 16:20:47.779489 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 29 16:20:47.779746 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:20:47.779772 kernel: scsi host0: Virtio SCSI HBA
Jan 29 16:20:47.780174 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:20:47.780206 kernel: GPT:9289727 != 125829119
Jan 29 16:20:47.780227 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:20:47.780263 kernel: GPT:9289727 != 125829119
Jan 29 16:20:47.780281 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:20:47.780297 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:20:47.780317 kernel: libata version 3.00 loaded.
Jan 29 16:20:47.780335 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 29 16:20:47.825203 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 29 16:20:47.830375 kernel: scsi host1: ata_piix
Jan 29 16:20:47.830638 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB)
Jan 29 16:20:47.830995 kernel: scsi host2: ata_piix
Jan 29 16:20:47.831217 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 29 16:20:47.831239 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 29 16:20:47.831258 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:20:47.831277 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:20:47.831295 kernel: ACPI: bus type USB registered
Jan 29 16:20:47.831315 kernel: usbcore: registered new interface driver usbfs
Jan 29 16:20:47.827278 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:20:47.838224 kernel: usbcore: registered new interface driver hub
Jan 29 16:20:47.838320 kernel: usbcore: registered new device driver usb
Jan 29 16:20:47.827486 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:20:47.839343 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:20:47.842531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:20:47.842966 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:20:47.846948 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:20:47.856434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:20:47.859596 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:20:47.968265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:20:47.989099 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:20:48.069559 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:20:48.076565 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (452)
Jan 29 16:20:48.083874 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (462)
Jan 29 16:20:48.110317 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 29 16:20:48.116857 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 29 16:20:48.117118 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 29 16:20:48.117315 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 29 16:20:48.117523 kernel: hub 1-0:1.0: USB hub found
Jan 29 16:20:48.117843 kernel: hub 1-0:1.0: 2 ports detected
Jan 29 16:20:48.128223 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 16:20:48.147043 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 16:20:48.179995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:20:48.194433 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 16:20:48.195573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 16:20:48.203163 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:20:48.228298 disk-uuid[551]: Primary Header is updated.
Jan 29 16:20:48.228298 disk-uuid[551]: Secondary Entries is updated.
Jan 29 16:20:48.228298 disk-uuid[551]: Secondary Header is updated.
Jan 29 16:20:48.236971 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:20:48.246899 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:20:49.260982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:20:49.262524 disk-uuid[552]: The operation has completed successfully.
Jan 29 16:20:49.345616 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:20:49.345878 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:20:49.432349 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:20:49.439885 sh[563]: Success
Jan 29 16:20:49.467000 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 16:20:49.596288 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:20:49.606349 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:20:49.608023 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:20:49.645846 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:20:49.645981 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:20:49.646019 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:20:49.650129 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:20:49.650245 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:20:49.672398 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:20:49.674429 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:20:49.683265 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:20:49.687116 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:20:49.715900 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:20:49.715997 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:20:49.716036 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:20:49.729097 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:20:49.745321 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:20:49.749121 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:20:49.761330 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:20:49.769478 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:20:49.938914 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:20:49.961244 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:20:50.012398 ignition[662]: Ignition 2.20.0
Jan 29 16:20:50.012422 ignition[662]: Stage: fetch-offline
Jan 29 16:20:50.012507 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:50.012523 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:50.015258 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:20:50.012732 ignition[662]: parsed url from cmdline: ""
Jan 29 16:20:50.012738 ignition[662]: no config URL provided
Jan 29 16:20:50.012765 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:20:50.012780 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:20:50.022998 systemd-networkd[753]: lo: Link UP
Jan 29 16:20:50.012794 ignition[662]: failed to fetch config: resource requires networking
Jan 29 16:20:50.023006 systemd-networkd[753]: lo: Gained carrier
Jan 29 16:20:50.013144 ignition[662]: Ignition finished successfully
Jan 29 16:20:50.027166 systemd-networkd[753]: Enumeration completed
Jan 29 16:20:50.027575 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:20:50.027706 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 29 16:20:50.027713 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 29 16:20:50.029457 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:20:50.029464 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:20:50.030521 systemd-networkd[753]: eth0: Link UP
Jan 29 16:20:50.030529 systemd-networkd[753]: eth0: Gained carrier
Jan 29 16:20:50.030545 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 29 16:20:50.030565 systemd[1]: Reached target network.target - Network.
Jan 29 16:20:50.034838 systemd-networkd[753]: eth1: Link UP
Jan 29 16:20:50.034846 systemd-networkd[753]: eth1: Gained carrier
Jan 29 16:20:50.034865 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:20:50.042301 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:20:50.057989 systemd-networkd[753]: eth0: DHCPv4 address 64.23.139.59/20, gateway 64.23.128.1 acquired from 169.254.169.253
Jan 29 16:20:50.061982 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.6/20 acquired from 169.254.169.253
Jan 29 16:20:50.082725 ignition[757]: Ignition 2.20.0
Jan 29 16:20:50.082745 ignition[757]: Stage: fetch
Jan 29 16:20:50.083079 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:50.083097 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:50.083278 ignition[757]: parsed url from cmdline: ""
Jan 29 16:20:50.083285 ignition[757]: no config URL provided
Jan 29 16:20:50.083294 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:20:50.083309 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:20:50.083351 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 29 16:20:50.117071 ignition[757]: GET result: OK
Jan 29 16:20:50.117271 ignition[757]: parsing config with SHA512: 52de372fafc25aa2831e2092b9ab226e806cce199f3d3f7ccf808676b6096d84d7bb596558c09c4e9f310a806504ddadada8d6a8cd432148ca02d999a528074a
Jan 29 16:20:50.123842 unknown[757]: fetched base config from "system"
Jan 29 16:20:50.124187 ignition[757]: fetch: fetch complete
Jan 29 16:20:50.123859 unknown[757]: fetched base config from "system"
Jan 29 16:20:50.124193 ignition[757]: fetch: fetch passed
Jan 29 16:20:50.123867 unknown[757]: fetched user config from "digitalocean"
Jan 29 16:20:50.124267 ignition[757]: Ignition finished successfully
Jan 29 16:20:50.126935 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:20:50.134253 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:20:50.186925 ignition[764]: Ignition 2.20.0
Jan 29 16:20:50.186943 ignition[764]: Stage: kargs
Jan 29 16:20:50.187288 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:50.190838 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:20:50.187305 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:50.188712 ignition[764]: kargs: kargs passed
Jan 29 16:20:50.189011 ignition[764]: Ignition finished successfully
Jan 29 16:20:50.204324 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:20:50.229238 ignition[771]: Ignition 2.20.0
Jan 29 16:20:50.229256 ignition[771]: Stage: disks
Jan 29 16:20:50.229639 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:50.232637 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:20:50.229667 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:50.239111 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:20:50.230949 ignition[771]: disks: disks passed
Jan 29 16:20:50.240483 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:20:50.231028 ignition[771]: Ignition finished successfully
Jan 29 16:20:50.242205 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:20:50.243695 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:20:50.245090 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:20:50.253239 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:20:50.303610 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 16:20:50.309909 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:20:50.662089 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:20:50.822861 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:20:50.824224 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:20:50.826296 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:20:50.837076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:20:50.841260 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:20:50.847316 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Jan 29 16:20:50.861893 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (787)
Jan 29 16:20:50.867871 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:20:50.870080 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 16:20:50.880315 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:20:50.880355 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:20:50.871637 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:20:50.871697 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:20:50.887828 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:20:50.894169 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:20:50.900288 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:20:50.906555 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:20:51.021295 coreos-metadata[789]: Jan 29 16:20:51.020 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 16:20:51.025131 coreos-metadata[790]: Jan 29 16:20:51.025 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 16:20:51.027391 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:20:51.038054 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:20:51.042207 coreos-metadata[789]: Jan 29 16:20:51.042 INFO Fetch successful
Jan 29 16:20:51.045721 coreos-metadata[790]: Jan 29 16:20:51.043 INFO Fetch successful
Jan 29 16:20:51.054119 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Jan 29 16:20:51.054910 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Jan 29 16:20:51.056946 coreos-metadata[790]: Jan 29 16:20:51.056 INFO wrote hostname ci-4230.0.0-5-94c51ad0b0 to /sysroot/etc/hostname
Jan 29 16:20:51.060121 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:20:51.063920 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:20:51.071559 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:20:51.234213 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:20:51.242136 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:20:51.245462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:20:51.279869 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:20:51.307853 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:20:51.329905 ignition[910]: INFO : Ignition 2.20.0
Jan 29 16:20:51.331260 ignition[910]: INFO : Stage: mount
Jan 29 16:20:51.332097 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:51.332097 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:51.334373 ignition[910]: INFO : mount: mount passed
Jan 29 16:20:51.334373 ignition[910]: INFO : Ignition finished successfully
Jan 29 16:20:51.334479 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:20:51.346096 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:20:51.530133 systemd-networkd[753]: eth1: Gained IPv6LL
Jan 29 16:20:51.644261 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:20:51.657272 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:20:51.673884 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (921)
Jan 29 16:20:51.681507 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:20:51.681608 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:20:51.681636 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:20:51.690925 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:20:51.694587 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:20:51.751530 ignition[938]: INFO : Ignition 2.20.0
Jan 29 16:20:51.751530 ignition[938]: INFO : Stage: files
Jan 29 16:20:51.753456 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:51.753456 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:51.753456 ignition[938]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:20:51.756432 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:20:51.756432 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:20:51.771004 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:20:51.772685 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:20:51.772685 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:20:51.771637 unknown[938]: wrote ssh authorized keys file for user: core
Jan 29 16:20:51.778953 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:20:51.778953 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:20:51.778953 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:20:51.778953 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:20:51.778953 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 16:20:51.778953 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 16:20:51.778953 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 16:20:51.778953 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 29 16:20:52.042151 systemd-networkd[753]: eth0: Gained IPv6LL
Jan 29 16:20:52.260876 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 16:20:52.622216 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 16:20:52.623939 ignition[938]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:20:52.623939 ignition[938]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:20:52.623939 ignition[938]: INFO : files: files passed
Jan 29 16:20:52.623939 ignition[938]: INFO : Ignition finished successfully
Jan 29 16:20:52.625669 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:20:52.636339 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:20:52.640190 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:20:52.654313 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:20:52.655516 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:20:52.670194 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:20:52.670194 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:20:52.674093 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:20:52.676181 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:20:52.678568 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:20:52.686148 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:20:52.757205 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:20:52.757420 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:20:52.758650 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:20:52.760172 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:20:52.763400 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:20:52.776195 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:20:52.800206 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:20:52.810229 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:20:52.830790 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:20:52.831842 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:20:52.833663 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:20:52.835179 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:20:52.835416 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:20:52.837144 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:20:52.838215 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:20:52.840006 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:20:52.841739 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:20:52.842846 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:20:52.844490 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:20:52.846214 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:20:52.847856 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:20:52.849371 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:20:52.850836 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:20:52.852209 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:20:52.852494 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:20:52.854207 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:20:52.855268 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:20:52.856461 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:20:52.857022 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:20:52.858263 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:20:52.858573 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:20:52.860438 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:20:52.860799 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:20:52.862505 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:20:52.862795 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:20:52.864384 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 16:20:52.864646 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:20:52.873963 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:20:52.874647 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:20:52.877347 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:20:52.882419 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:20:52.883190 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:20:52.883502 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:20:52.885382 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:20:52.887109 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:20:52.912090 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:20:52.912284 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:20:52.919889 ignition[991]: INFO : Ignition 2.20.0
Jan 29 16:20:52.923127 ignition[991]: INFO : Stage: umount
Jan 29 16:20:52.923127 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:52.923127 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:52.926726 ignition[991]: INFO : umount: umount passed
Jan 29 16:20:52.926726 ignition[991]: INFO : Ignition finished successfully
Jan 29 16:20:52.929586 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:20:52.929803 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:20:52.932204 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:20:52.932372 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:20:52.935257 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:20:52.935378 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:20:52.936769 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 16:20:52.936929 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 16:20:52.939230 systemd[1]: Stopped target network.target - Network.
Jan 29 16:20:52.942285 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:20:52.942450 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:20:52.943961 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:20:52.945222 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:20:52.949132 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:20:52.955041 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:20:52.956278 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:20:52.958443 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:20:52.958539 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:20:52.960049 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:20:52.960151 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:20:52.961419 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:20:52.961521 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:20:52.963076 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:20:52.963183 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:20:52.997985 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:20:53.001439 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:20:53.032584 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:20:53.034080 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:20:53.034254 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:20:53.055957 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:20:53.056490 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:20:53.056677 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:20:53.061305 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:20:53.061507 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:20:53.067527 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:20:53.070460 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:20:53.070621 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:20:53.071871 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:20:53.071945 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:20:53.087227 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:20:53.088081 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:20:53.088202 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:20:53.092358 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:20:53.092462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:20:53.093903 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:20:53.093989 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:20:53.095033 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:20:53.095125 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:20:53.097564 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:20:53.104501 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:20:53.104683 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:20:53.115675 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:20:53.115976 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:20:53.121449 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:20:53.122304 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:20:53.126125 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:20:53.126235 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:20:53.127083 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:20:53.127150 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:20:53.128513 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:20:53.128602 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:20:53.131329 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:20:53.131436 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:20:53.133208 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:20:53.133330 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:20:53.145205 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:20:53.147308 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:20:53.147431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:20:53.149537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:20:53.149637 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:20:53.155050 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 16:20:53.155154 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:20:53.157238 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:20:53.157488 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:20:53.160357 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:20:53.169346 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:20:53.184881 systemd[1]: Switching root.
Jan 29 16:20:53.328239 systemd-journald[183]: Journal stopped
Jan 29 16:20:55.381000 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:20:55.381122 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:20:55.381147 kernel: SELinux: policy capability open_perms=1
Jan 29 16:20:55.381166 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:20:55.381187 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:20:55.381206 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:20:55.381225 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:20:55.381253 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:20:55.381274 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:20:55.381292 kernel: audit: type=1403 audit(1738167653.503:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:20:55.381333 systemd[1]: Successfully loaded SELinux policy in 62.452ms.
Jan 29 16:20:55.381372 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.230ms.
Jan 29 16:20:55.381404 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:20:55.381432 systemd[1]: Detected virtualization kvm.
Jan 29 16:20:55.381459 systemd[1]: Detected architecture x86-64.
Jan 29 16:20:55.381541 systemd[1]: Detected first boot.
Jan 29 16:20:55.381570 systemd[1]: Hostname set to .
Jan 29 16:20:55.381599 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:20:55.381632 zram_generator::config[1040]: No configuration found.
Jan 29 16:20:55.381662 kernel: Guest personality initialized and is inactive
Jan 29 16:20:55.381683 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 29 16:20:55.381703 kernel: Initialized host personality
Jan 29 16:20:55.381733 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:20:55.381769 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:20:55.381808 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:20:55.381882 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:20:55.381911 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:20:55.381938 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:20:55.381965 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:20:55.381994 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:20:55.382021 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:20:55.382057 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:20:55.382088 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:20:55.382111 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:20:55.382131 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:20:55.382150 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:20:55.382170 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:20:55.382189 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:20:55.382209 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:20:55.382228 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:20:55.382255 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:20:55.382275 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:20:55.382296 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 16:20:55.382317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:20:55.382338 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:20:55.382362 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:20:55.382385 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:20:55.382418 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:20:55.382444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:20:55.382468 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:20:55.382490 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:20:55.382508 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:20:55.382531 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:20:55.382554 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:20:55.382586 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:20:55.382613 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:20:55.382641 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:20:55.382685 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:20:55.382710 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:20:55.382732 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:20:55.382757 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:20:55.382778 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:20:55.382802 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:55.384934 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:20:55.384989 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:20:55.385027 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:20:55.385056 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:20:55.385084 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:20:55.385111 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:20:55.385140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:20:55.385175 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:20:55.385203 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:20:55.385230 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:20:55.385258 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:20:55.385291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:20:55.385319 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:20:55.385347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:20:55.385375 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:20:55.385403 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:20:55.385430 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:20:55.385460 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:20:55.385488 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:20:55.385521 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:20:55.385552 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:20:55.385583 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:20:55.385611 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:20:55.385637 kernel: fuse: init (API version 7.39)
Jan 29 16:20:55.385674 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:20:55.385697 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:20:55.385717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:20:55.385744 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:20:55.385763 systemd[1]: Stopped verity-setup.service.
Jan 29 16:20:55.387932 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:55.387994 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:20:55.388024 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:20:55.388052 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:20:55.388081 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:20:55.388108 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:20:55.388136 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:20:55.388165 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:20:55.388193 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:20:55.388229 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:20:55.388307 systemd-journald[1117]: Collecting audit messages is disabled.
Jan 29 16:20:55.388356 systemd-journald[1117]: Journal started
Jan 29 16:20:55.388411 systemd-journald[1117]: Runtime Journal (/run/log/journal/d0d1b5648dff4a9eb6b2229eafd82e4d) is 4.9M, max 39.3M, 34.4M free.
Jan 29 16:20:54.794334 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:20:54.809213 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 16:20:54.810112 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:20:55.400861 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:20:55.400979 kernel: ACPI: bus type drm_connector registered
Jan 29 16:20:55.401478 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:20:55.404774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:20:55.405211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:20:55.406717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:20:55.407102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:20:55.407846 kernel: loop: module loaded
Jan 29 16:20:55.411174 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:20:55.411522 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:20:55.414410 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:20:55.414737 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:20:55.416235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:20:55.416549 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:20:55.418314 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:20:55.421169 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:20:55.423203 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:20:55.451716 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:20:55.463415 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:20:55.476171 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:20:55.477276 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:20:55.477328 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:20:55.483675 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:20:55.491085 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:20:55.499986 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:20:55.502450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:20:55.512138 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:20:55.521055 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:20:55.523003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:20:55.529951 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:20:55.532051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:20:55.535011 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:20:55.540084 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:20:55.550237 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:20:55.556981 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:20:55.559443 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:20:55.561120 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:20:55.563443 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:20:55.593264 systemd-journald[1117]: Time spent on flushing to /var/log/journal/d0d1b5648dff4a9eb6b2229eafd82e4d is 146.628ms for 985 entries.
Jan 29 16:20:55.593264 systemd-journald[1117]: System Journal (/var/log/journal/d0d1b5648dff4a9eb6b2229eafd82e4d) is 8M, max 195.6M, 187.6M free.
Jan 29 16:20:55.782195 systemd-journald[1117]: Received client request to flush runtime journal.
Jan 29 16:20:55.782279 kernel: loop0: detected capacity change from 0 to 8
Jan 29 16:20:55.782306 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:20:55.782343 kernel: loop1: detected capacity change from 0 to 138176
Jan 29 16:20:55.619769 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:20:55.625871 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:20:55.630206 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:20:55.642213 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:20:55.653203 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:20:55.717644 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:20:55.760333 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 16:20:55.761986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:20:55.784998 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:20:55.800332 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:20:55.816868 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:20:55.827926 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:20:55.841037 kernel: loop2: detected capacity change from 0 to 218376
Jan 29 16:20:55.906020 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Jan 29 16:20:55.915090 kernel: loop3: detected capacity change from 0 to 147912
Jan 29 16:20:55.906052 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Jan 29 16:20:55.944915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:20:55.976917 kernel: loop4: detected capacity change from 0 to 8
Jan 29 16:20:55.987963 kernel: loop5: detected capacity change from 0 to 138176
Jan 29 16:20:56.042904 kernel: loop6: detected capacity change from 0 to 218376
Jan 29 16:20:56.064869 kernel: loop7: detected capacity change from 0 to 147912
Jan 29 16:20:56.089407 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 29 16:20:56.092909 (sd-merge)[1187]: Merged extensions into '/usr'.
Jan 29 16:20:56.101637 systemd[1]: Reload requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:20:56.101667 systemd[1]: Reloading...
Jan 29 16:20:56.380415 zram_generator::config[1214]: No configuration found.
Jan 29 16:20:56.648862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:20:56.742447 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:20:56.791306 systemd[1]: Reloading finished in 688 ms.
Jan 29 16:20:56.820161 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:20:56.823206 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:20:56.842240 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:20:56.847973 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:20:56.872126 systemd[1]: Reload requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:20:56.872151 systemd[1]: Reloading...
Jan 29 16:20:56.898989 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:20:56.899445 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:20:56.901591 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:20:56.902151 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 29 16:20:56.902323 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 29 16:20:56.908435 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:20:56.908459 systemd-tmpfiles[1259]: Skipping /boot
Jan 29 16:20:56.932212 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:20:56.932232 systemd-tmpfiles[1259]: Skipping /boot
Jan 29 16:20:57.095866 zram_generator::config[1288]: No configuration found.
Jan 29 16:20:57.347253 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:20:57.522875 systemd[1]: Reloading finished in 649 ms.
Jan 29 16:20:57.569015 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:20:57.570723 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:20:57.592532 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:20:57.598406 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:20:57.609457 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:20:57.620069 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:20:57.626993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:20:57.633429 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:20:57.641946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:57.642381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:20:57.651521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:20:57.662394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:20:57.682382 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:20:57.683712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:20:57.684077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:20:57.684300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:57.702226 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:20:57.708482 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:57.709029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:20:57.709433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:20:57.709673 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:20:57.709939 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:57.720439 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:57.721174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:20:57.730811 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:20:57.733322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:20:57.733675 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:20:57.734000 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:57.752937 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:20:57.768794 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:20:57.787837 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:20:57.791165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:20:57.791528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:20:57.799597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:20:57.801573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:20:57.814000 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:20:57.817645 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:20:57.829222 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:20:57.830563 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:20:57.830907 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:20:57.834091 systemd-udevd[1340]: Using default interface naming scheme 'v255'.
Jan 29 16:20:57.837191 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:20:57.837932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:20:57.842144 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:20:57.868578 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:20:57.873153 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:20:57.901527 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:20:57.914281 augenrules[1374]: No rules
Jan 29 16:20:57.916058 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:20:57.916565 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:20:57.935395 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:20:57.941070 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:20:57.954276 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:20:58.178044 systemd-resolved[1337]: Positive Trust Anchors:
Jan 29 16:20:58.178752 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:20:58.178866 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:20:58.202770 systemd-resolved[1337]: Using system hostname 'ci-4230.0.0-5-94c51ad0b0'.
Jan 29 16:20:58.206679 systemd-networkd[1388]: lo: Link UP
Jan 29 16:20:58.206695 systemd-networkd[1388]: lo: Gained carrier
Jan 29 16:20:58.208294 systemd-networkd[1388]: Enumeration completed
Jan 29 16:20:58.208532 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:20:58.217231 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:20:58.225232 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:20:58.226469 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:20:58.227627 systemd[1]: Reached target network.target - Network.
Jan 29 16:20:58.228464 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:20:58.229451 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:20:58.230366 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:20:58.266740 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:20:58.278277 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 16:20:58.316401 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Jan 29 16:20:58.321995 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1399)
Jan 29 16:20:58.326152 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 29 16:20:58.327082 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:58.327380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:20:58.337256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:20:58.347259 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:20:58.352197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:20:58.355111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:20:58.355184 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:20:58.355253 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:20:58.355292 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:20:58.356437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:20:58.357321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:20:58.377202 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:20:58.377963 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:20:58.382916 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:20:58.404863 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 29 16:20:58.407107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:20:58.409037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:20:58.418799 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 29 16:20:58.424443 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:20:58.435469 systemd-networkd[1388]: eth1: Configuring with /run/systemd/network/10-0a:7d:18:82:f2:5c.network.
Jan 29 16:20:58.438025 systemd-networkd[1388]: eth1: Link UP
Jan 29 16:20:58.438041 systemd-networkd[1388]: eth1: Gained carrier
Jan 29 16:20:58.442388 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Jan 29 16:20:58.471113 systemd-networkd[1388]: eth0: Configuring with /run/systemd/network/10-e2:19:da:d8:f3:8f.network.
Jan 29 16:20:58.473373 systemd-networkd[1388]: eth0: Link UP
Jan 29 16:20:58.473391 systemd-networkd[1388]: eth0: Gained carrier
Jan 29 16:20:58.554922 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 16:20:58.562907 kernel: ACPI: button: Power Button [PWRF]
Jan 29 16:20:58.607869 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 29 16:20:58.632869 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 16:20:58.633975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:20:58.645423 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:20:58.693986 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:20:58.707878 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 29 16:20:58.713370 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 29 16:20:58.727475 kernel: Console: switching to colour dummy device 80x25
Jan 29 16:20:58.729863 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 16:20:58.729992 kernel: [drm] features: -context_init
Jan 29 16:20:58.744888 kernel: [drm] number of scanouts: 1
Jan 29 16:20:58.758411 kernel: [drm] number of cap sets: 0
Jan 29 16:20:58.769861 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 29 16:20:58.792117 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:20:58.800872 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 29 16:20:58.801040 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 16:20:58.803413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:20:58.824898 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 16:20:58.909472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:20:58.911038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:20:58.924726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:20:58.998266 kernel: EDAC MC: Ver: 3.0.0
Jan 29 16:20:59.031547 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:20:59.041305 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:20:59.070273 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:20:59.100710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:20:59.113567 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:20:59.117675 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:20:59.119428 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:20:59.119929 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:20:59.120906 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:20:59.121497 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:20:59.121869 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:20:59.122005 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:20:59.122130 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:20:59.122185 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:20:59.122278 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:20:59.126020 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:20:59.129590 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:20:59.137145 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:20:59.140077 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:20:59.141121 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:20:59.152944 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:20:59.155929 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:20:59.167366 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:20:59.175419 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:20:59.178802 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:20:59.181556 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:20:59.182595 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:20:59.182638 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:20:59.185802 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:20:59.192246 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:20:59.210247 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 16:20:59.218333 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:20:59.233278 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:20:59.248402 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:20:59.251111 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:20:59.257304 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:20:59.270239 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:20:59.277686 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:20:59.296144 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:20:59.303252 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:20:59.306627 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:20:59.310226 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:20:59.327165 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:20:59.338203 jq[1454]: false
Jan 29 16:20:59.336804 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:20:59.350862 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:20:59.351796 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found loop4
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found loop5
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found loop6
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found loop7
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found vda
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found vda1
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found vda2
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found vda3
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found usr
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found vda4
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found vda6
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found vda7
Jan 29 16:20:59.383799 extend-filesystems[1455]: Found vda9
Jan 29 16:20:59.383799 extend-filesystems[1455]: Checking size of /dev/vda9
Jan 29 16:20:59.548286 coreos-metadata[1452]: Jan 29 16:20:59.493 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 16:20:59.548286 coreos-metadata[1452]: Jan 29 16:20:59.507 INFO Fetch successful
Jan 29 16:20:59.424390 dbus-daemon[1453]: [system] SELinux support is enabled
Jan 29 16:20:59.422382 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:20:59.552805 extend-filesystems[1455]: Resized partition /dev/vda9
Jan 29 16:20:59.566073 update_engine[1462]: I20250129 16:20:59.404284 1462 main.cc:92] Flatcar Update Engine starting
Jan 29 16:20:59.566073 update_engine[1462]: I20250129 16:20:59.455379 1462 update_check_scheduler.cc:74] Next update check in 7m11s
Jan 29 16:20:59.425214 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:20:59.575966 extend-filesystems[1488]: resize2fs 1.47.1 (20-May-2024)
Jan 29 16:20:59.428198 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:20:59.589769 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 29 16:20:59.436615 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:20:59.590386 jq[1463]: true
Jan 29 16:20:59.436669 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:20:59.447364 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:20:59.596301 jq[1481]: true
Jan 29 16:20:59.447577 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 29 16:20:59.447624 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:20:59.484761 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:20:59.507865 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:20:59.512396 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 16:20:59.538216 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:20:59.613368 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:20:59.614394 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:20:59.738399 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 16:20:59.923729 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1397)
Jan 29 16:20:59.745624 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 16:20:59.794190 systemd-networkd[1388]: eth1: Gained IPv6LL
Jan 29 16:20:59.838857 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 16:20:59.852078 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 16:20:59.876201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:20:59.893170 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 16:20:59.950292 systemd-logind[1461]: New seat seat0.
Jan 29 16:20:59.971472 systemd-logind[1461]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 16:20:59.971519 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 16:20:59.972154 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 16:20:59.987511 bash[1510]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:20:59.992722 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 16:21:00.014364 systemd[1]: Starting sshkeys.service...
Jan 29 16:21:00.152394 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 16:21:00.197875 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 29 16:21:00.196899 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 16:21:00.212765 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 16:21:00.226157 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 16:21:00.235777 systemd-networkd[1388]: eth0: Gained IPv6LL
Jan 29 16:21:00.269535 extend-filesystems[1488]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 16:21:00.269535 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 29 16:21:00.269535 extend-filesystems[1488]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 29 16:21:00.303388 extend-filesystems[1455]: Resized filesystem in /dev/vda9
Jan 29 16:21:00.303388 extend-filesystems[1455]: Found vdb
Jan 29 16:21:00.282586 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:21:00.323698 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:21:00.283132 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 16:21:00.328723 coreos-metadata[1535]: Jan 29 16:21:00.325 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 16:21:00.341248 coreos-metadata[1535]: Jan 29 16:21:00.341 INFO Fetch successful
Jan 29 16:21:00.371176 unknown[1535]: wrote ssh authorized keys file for user: core
Jan 29 16:21:00.395356 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 16:21:00.421735 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:21:00.434143 systemd[1]: Started sshd@0-64.23.139.59:22-139.178.89.65:51752.service - OpenSSH per-connection server daemon (139.178.89.65:51752).
Jan 29 16:21:00.448315 containerd[1482]: time="2025-01-29T16:21:00.448010377Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 16:21:00.465138 update-ssh-keys[1547]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:21:00.468600 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 29 16:21:00.486814 systemd[1]: Finished sshkeys.service.
Jan 29 16:21:00.532216 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:21:00.532844 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:21:00.551532 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:21:00.598020 containerd[1482]: time="2025-01-29T16:21:00.597920332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:21:00.607711 containerd[1482]: time="2025-01-29T16:21:00.607595997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:21:00.608201 containerd[1482]: time="2025-01-29T16:21:00.608160346Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.608303843Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.608871633Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.608914988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.609046085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.609073790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.609491551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.609522892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.609549602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.609565683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 16:21:00.610044 containerd[1482]: time="2025-01-29T16:21:00.609710527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:21:00.610928 containerd[1482]: time="2025-01-29T16:21:00.610882813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:21:00.611418 containerd[1482]: time="2025-01-29T16:21:00.611379321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:21:00.611586 containerd[1482]: time="2025-01-29T16:21:00.611561511Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 16:21:00.611855 containerd[1482]: time="2025-01-29T16:21:00.611791718Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 16:21:00.612065 containerd[1482]: time="2025-01-29T16:21:00.612036513Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 16:21:00.621397 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:21:00.763935 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.817934012Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.818058443Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.818095572Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.818128646Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.818157643Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.818462246Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.826369154Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.826716063Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.826752311Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.826792206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.826854132Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.826878992Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.826906126Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 16:21:00.828732 containerd[1482]: time="2025-01-29T16:21:00.826935122Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.826967811Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.827035710Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.827065456Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.827092741Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.827561827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.832984281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.833041733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.833073709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.833110623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.833141691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.833174606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.833217375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.833252810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.835982 containerd[1482]: time="2025-01-29T16:21:00.833296871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833336063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833362328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833388519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833422587Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833505210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833538875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833561343Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833646984Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833690576Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833711850Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 16:21:00.836801 containerd[1482]: time="2025-01-29T16:21:00.833777957Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:21:00.853191 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:21:00.870444 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:21:00.884185 containerd[1482]: time="2025-01-29T16:21:00.834554707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:21:00.884185 containerd[1482]: time="2025-01-29T16:21:00.874075564Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:21:00.884185 containerd[1482]: time="2025-01-29T16:21:00.874168892Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:21:00.940198 containerd[1482]: time="2025-01-29T16:21:00.923979118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 16:21:00.955728 containerd[1482]: time="2025-01-29T16:21:00.937528946Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:21:00.955728 containerd[1482]: time="2025-01-29T16:21:00.937700156Z" level=info msg="Connect containerd service" Jan 29 16:21:00.955728 containerd[1482]: time="2025-01-29T16:21:00.937881517Z" level=info msg="using legacy CRI server" Jan 29 16:21:00.955728 containerd[1482]: time="2025-01-29T16:21:00.937915253Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:21:00.955728 containerd[1482]: 
time="2025-01-29T16:21:00.940701186Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:21:00.979414 containerd[1482]: time="2025-01-29T16:21:00.979342482Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:21:00.982930 containerd[1482]: time="2025-01-29T16:21:00.982202533Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:21:00.982930 containerd[1482]: time="2025-01-29T16:21:00.982405041Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:21:00.982930 containerd[1482]: time="2025-01-29T16:21:00.982492407Z" level=info msg="Start subscribing containerd event" Jan 29 16:21:00.982930 containerd[1482]: time="2025-01-29T16:21:00.982579783Z" level=info msg="Start recovering state" Jan 29 16:21:00.982930 containerd[1482]: time="2025-01-29T16:21:00.982798544Z" level=info msg="Start event monitor" Jan 29 16:21:00.986070 containerd[1482]: time="2025-01-29T16:21:00.986000159Z" level=info msg="Start snapshots syncer" Jan 29 16:21:00.986314 containerd[1482]: time="2025-01-29T16:21:00.986285794Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:21:00.988904 containerd[1482]: time="2025-01-29T16:21:00.988269495Z" level=info msg="Start streaming server" Jan 29 16:21:00.989056 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 29 16:21:01.018172 containerd[1482]: time="2025-01-29T16:21:01.017974933Z" level=info msg="containerd successfully booted in 0.588942s"
Jan 29 16:21:01.362398 sshd[1551]: Accepted publickey for core from 139.178.89.65 port 51752 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY
Jan 29 16:21:01.364749 sshd-session[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:01.389294 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 16:21:01.398413 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 16:21:01.410061 systemd-logind[1461]: New session 1 of user core.
Jan 29 16:21:01.441653 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 16:21:01.453388 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 16:21:01.472190 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:21:01.478442 systemd-logind[1461]: New session c1 of user core.
Jan 29 16:21:01.778695 systemd[1569]: Queued start job for default target default.target.
Jan 29 16:21:01.793531 systemd[1569]: Created slice app.slice - User Application Slice.
Jan 29 16:21:01.793662 systemd[1569]: Reached target paths.target - Paths.
Jan 29 16:21:01.793752 systemd[1569]: Reached target timers.target - Timers.
Jan 29 16:21:01.799083 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 16:21:01.837027 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:21:01.837296 systemd[1569]: Reached target sockets.target - Sockets.
Jan 29 16:21:01.837407 systemd[1569]: Reached target basic.target - Basic System.
Jan 29 16:21:01.837485 systemd[1569]: Reached target default.target - Main User Target.
Jan 29 16:21:01.837541 systemd[1569]: Startup finished in 343ms.
Jan 29 16:21:01.837751 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:21:01.849265 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:21:01.939388 systemd[1]: Started sshd@1-64.23.139.59:22-139.178.89.65:43158.service - OpenSSH per-connection server daemon (139.178.89.65:43158).
Jan 29 16:21:02.030885 sshd[1580]: Accepted publickey for core from 139.178.89.65 port 43158 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY
Jan 29 16:21:02.034129 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:02.045934 systemd-logind[1461]: New session 2 of user core.
Jan 29 16:21:02.052286 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:21:02.129453 sshd[1582]: Connection closed by 139.178.89.65 port 43158
Jan 29 16:21:02.129933 sshd-session[1580]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:02.147956 systemd[1]: sshd@1-64.23.139.59:22-139.178.89.65:43158.service: Deactivated successfully.
Jan 29 16:21:02.151949 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 16:21:02.157167 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit.
Jan 29 16:21:02.164493 systemd[1]: Started sshd@2-64.23.139.59:22-139.178.89.65:43168.service - OpenSSH per-connection server daemon (139.178.89.65:43168).
Jan 29 16:21:02.174504 systemd-logind[1461]: Removed session 2.
Jan 29 16:21:02.246217 sshd[1587]: Accepted publickey for core from 139.178.89.65 port 43168 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY
Jan 29 16:21:02.249638 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:02.260992 systemd-logind[1461]: New session 3 of user core.
Jan 29 16:21:02.269207 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:21:02.361073 sshd[1590]: Connection closed by 139.178.89.65 port 43168
Jan 29 16:21:02.362637 sshd-session[1587]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:02.369537 systemd[1]: sshd@2-64.23.139.59:22-139.178.89.65:43168.service: Deactivated successfully.
Jan 29 16:21:02.374167 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 16:21:02.378926 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit.
Jan 29 16:21:02.381396 systemd-logind[1461]: Removed session 3.
Jan 29 16:21:02.828320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:21:02.836126 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 16:21:02.838934 systemd[1]: Startup finished in 1.674s (kernel) + 7.700s (initrd) + 9.393s (userspace) = 18.769s.
Jan 29 16:21:02.843417 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:21:03.976224 kubelet[1599]: E0129 16:21:03.976072 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:21:03.980159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:21:03.980428 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:21:03.981352 systemd[1]: kubelet.service: Consumed 1.710s CPU time, 256M memory peak.
Jan 29 16:21:04.761538 systemd-timesyncd[1354]: Contacted time server 168.235.89.132:123 (1.flatcar.pool.ntp.org).
Jan 29 16:21:04.761644 systemd-timesyncd[1354]: Initial clock synchronization to Wed 2025-01-29 16:21:04.425329 UTC.
Jan 29 16:21:12.180489 systemd[1]: Started sshd@3-64.23.139.59:22-139.178.89.65:57408.service - OpenSSH per-connection server daemon (139.178.89.65:57408).
Jan 29 16:21:12.236859 sshd[1612]: Accepted publickey for core from 139.178.89.65 port 57408 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY
Jan 29 16:21:12.239706 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:12.250082 systemd-logind[1461]: New session 4 of user core.
Jan 29 16:21:12.258232 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:21:12.324902 sshd[1614]: Connection closed by 139.178.89.65 port 57408
Jan 29 16:21:12.326040 sshd-session[1612]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:12.345483 systemd[1]: sshd@3-64.23.139.59:22-139.178.89.65:57408.service: Deactivated successfully.
Jan 29 16:21:12.348552 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:21:12.352175 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:21:12.366488 systemd[1]: Started sshd@4-64.23.139.59:22-139.178.89.65:57422.service - OpenSSH per-connection server daemon (139.178.89.65:57422).
Jan 29 16:21:12.369611 systemd-logind[1461]: Removed session 4.
Jan 29 16:21:12.428900 sshd[1619]: Accepted publickey for core from 139.178.89.65 port 57422 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY
Jan 29 16:21:12.431650 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:12.443179 systemd-logind[1461]: New session 5 of user core.
Jan 29 16:21:12.449209 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:21:12.511015 sshd[1622]: Connection closed by 139.178.89.65 port 57422
Jan 29 16:21:12.510636 sshd-session[1619]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:12.535052 systemd[1]: sshd@4-64.23.139.59:22-139.178.89.65:57422.service: Deactivated successfully.
Jan 29 16:21:12.538146 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:21:12.540201 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:21:12.552480 systemd[1]: Started sshd@5-64.23.139.59:22-139.178.89.65:57428.service - OpenSSH per-connection server daemon (139.178.89.65:57428).
Jan 29 16:21:12.555946 systemd-logind[1461]: Removed session 5.
Jan 29 16:21:12.612114 sshd[1627]: Accepted publickey for core from 139.178.89.65 port 57428 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY
Jan 29 16:21:12.614662 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:12.625539 systemd-logind[1461]: New session 6 of user core.
Jan 29 16:21:12.641190 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:21:12.707794 sshd[1630]: Connection closed by 139.178.89.65 port 57428
Jan 29 16:21:12.707572 sshd-session[1627]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:12.721080 systemd[1]: sshd@5-64.23.139.59:22-139.178.89.65:57428.service: Deactivated successfully.
Jan 29 16:21:12.723625 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:21:12.726308 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:21:12.735523 systemd[1]: Started sshd@6-64.23.139.59:22-139.178.89.65:57442.service - OpenSSH per-connection server daemon (139.178.89.65:57442).
Jan 29 16:21:12.738398 systemd-logind[1461]: Removed session 6.
Jan 29 16:21:12.793263 sshd[1635]: Accepted publickey for core from 139.178.89.65 port 57442 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY
Jan 29 16:21:12.795502 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:12.805118 systemd-logind[1461]: New session 7 of user core.
Jan 29 16:21:12.811224 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:21:12.973291 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 16:21:12.973836 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:21:12.992795 sudo[1639]: pam_unix(sudo:session): session closed for user root
Jan 29 16:21:12.997262 sshd[1638]: Connection closed by 139.178.89.65 port 57442
Jan 29 16:21:12.998649 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:13.012874 systemd[1]: sshd@6-64.23.139.59:22-139.178.89.65:57442.service: Deactivated successfully.
Jan 29 16:21:13.015697 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:21:13.019144 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:21:13.024455 systemd[1]: Started sshd@7-64.23.139.59:22-139.178.89.65:57452.service - OpenSSH per-connection server daemon (139.178.89.65:57452).
Jan 29 16:21:13.027204 systemd-logind[1461]: Removed session 7.
Jan 29 16:21:13.090002 sshd[1644]: Accepted publickey for core from 139.178.89.65 port 57452 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY
Jan 29 16:21:13.091434 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:13.102919 systemd-logind[1461]: New session 8 of user core.
Jan 29 16:21:13.110210 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 16:21:13.177699 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 16:21:13.178317 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:21:13.185180 sudo[1649]: pam_unix(sudo:session): session closed for user root
Jan 29 16:21:13.197036 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 16:21:13.197630 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:21:13.221715 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:21:13.281092 augenrules[1671]: No rules
Jan 29 16:21:13.283177 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:21:13.283769 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:21:13.285583 sudo[1648]: pam_unix(sudo:session): session closed for user root
Jan 29 16:21:13.290148 sshd[1647]: Connection closed by 139.178.89.65 port 57452
Jan 29 16:21:13.291162 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:13.304372 systemd[1]: sshd@7-64.23.139.59:22-139.178.89.65:57452.service: Deactivated successfully.
Jan 29 16:21:13.307201 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 16:21:13.309619 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit.
Jan 29 16:21:13.315367 systemd[1]: Started sshd@8-64.23.139.59:22-139.178.89.65:57454.service - OpenSSH per-connection server daemon (139.178.89.65:57454).
Jan 29 16:21:13.317588 systemd-logind[1461]: Removed session 8.
Jan 29 16:21:13.385583 sshd[1679]: Accepted publickey for core from 139.178.89.65 port 57454 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY
Jan 29 16:21:13.388216 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:13.399214 systemd-logind[1461]: New session 9 of user core.
Jan 29 16:21:13.407220 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 16:21:13.475244 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:21:13.476730 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:21:14.231933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:21:14.240222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:21:14.426049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:21:14.437565 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:21:14.534083 kubelet[1705]: E0129 16:21:14.533191 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:21:14.539299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:21:14.539539 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:21:14.540339 systemd[1]: kubelet.service: Consumed 245ms CPU time, 103.7M memory peak.
Jan 29 16:21:15.145160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:21:15.145699 systemd[1]: kubelet.service: Consumed 245ms CPU time, 103.7M memory peak.
Jan 29 16:21:15.155378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:21:15.216111 systemd[1]: Reload requested from client PID 1731 ('systemctl') (unit session-9.scope)...
Jan 29 16:21:15.216341 systemd[1]: Reloading...
Jan 29 16:21:15.439858 zram_generator::config[1780]: No configuration found.
Jan 29 16:21:15.641974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:21:15.840740 systemd[1]: Reloading finished in 623 ms.
Jan 29 16:21:15.926173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:21:15.927735 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:21:15.934066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:21:15.936343 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:21:15.936902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:21:15.936987 systemd[1]: kubelet.service: Consumed 144ms CPU time, 92.9M memory peak.
Jan 29 16:21:15.944594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:21:16.227220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:21:16.242556 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:21:16.318093 kubelet[1831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:21:16.318093 kubelet[1831]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:21:16.318093 kubelet[1831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:21:16.318589 kubelet[1831]: I0129 16:21:16.318289 1831 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:21:16.870502 kubelet[1831]: I0129 16:21:16.870446 1831 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 29 16:21:16.870724 kubelet[1831]: I0129 16:21:16.870710 1831 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:21:16.871125 kubelet[1831]: I0129 16:21:16.871107 1831 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 29 16:21:16.915182 kubelet[1831]: I0129 16:21:16.915114 1831 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:21:16.932183 kubelet[1831]: E0129 16:21:16.931995 1831 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:21:16.932183 kubelet[1831]: I0129 16:21:16.932053 1831 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:21:16.939859 kubelet[1831]: I0129 16:21:16.939742 1831 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:21:16.941254 kubelet[1831]: I0129 16:21:16.941111 1831 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:21:16.941479 kubelet[1831]: I0129 16:21:16.941200 1831 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"64.23.139.59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:21:16.941687 kubelet[1831]: I0129 16:21:16.941480 1831 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:21:16.941687 kubelet[1831]: I0129 16:21:16.941497 1831 container_manager_linux.go:304] "Creating device plugin manager"
Jan 29 16:21:16.941687 kubelet[1831]: I0129 16:21:16.941679 1831 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:21:16.949615 kubelet[1831]: I0129 16:21:16.949538 1831 kubelet.go:446] "Attempting to sync node with API server"
Jan 29 16:21:16.949615 kubelet[1831]: I0129 16:21:16.949598 1831 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:21:16.949615 kubelet[1831]: I0129 16:21:16.949638 1831 kubelet.go:352] "Adding apiserver pod source"
Jan 29 16:21:16.949877 kubelet[1831]: I0129 16:21:16.949657 1831 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:21:16.951499 kubelet[1831]: E0129 16:21:16.950814 1831 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:21:16.951499 kubelet[1831]: E0129 16:21:16.951480 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:21:16.953967 kubelet[1831]: I0129 16:21:16.953924 1831 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:21:16.954882 kubelet[1831]: I0129 16:21:16.954850 1831 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:21:16.956146 kubelet[1831]: W0129 16:21:16.956094 1831 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:21:16.959278 kubelet[1831]: I0129 16:21:16.959159 1831 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 29 16:21:16.959278 kubelet[1831]: I0129 16:21:16.959244 1831 server.go:1287] "Started kubelet"
Jan 29 16:21:16.959772 kubelet[1831]: I0129 16:21:16.959528 1831 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:21:16.959772 kubelet[1831]: I0129 16:21:16.959675 1831 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:21:16.960530 kubelet[1831]: I0129 16:21:16.960162 1831 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:21:16.961658 kubelet[1831]: I0129 16:21:16.961408 1831 server.go:490] "Adding debug handlers to kubelet server"
Jan 29 16:21:16.971862 kubelet[1831]: I0129 16:21:16.970510 1831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:21:16.981520 kubelet[1831]: I0129 16:21:16.981462 1831 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 16:21:16.984449 kubelet[1831]: I0129 16:21:16.984389 1831 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 29 16:21:16.987193 kubelet[1831]: E0129 16:21:16.987145 1831 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"64.23.139.59\" not found"
Jan 29 16:21:16.989071 kubelet[1831]: I0129 16:21:16.988880 1831 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:21:16.990592 kubelet[1831]: I0129 16:21:16.990463 1831 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:21:16.992846 kubelet[1831]: I0129 16:21:16.990940 1831 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:21:16.993330 kubelet[1831]: E0129 16:21:16.993005 1831 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:21:16.993509 kubelet[1831]: I0129 16:21:16.993201 1831 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:21:16.993613 kubelet[1831]: I0129 16:21:16.993596 1831 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:21:17.004722 kubelet[1831]: E0129 16:21:17.004627 1831 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"64.23.139.59\" not found" node="64.23.139.59"
Jan 29 16:21:17.032857 kubelet[1831]: I0129 16:21:17.030850 1831 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 16:21:17.032857 kubelet[1831]: I0129 16:21:17.030878 1831 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 16:21:17.032857 kubelet[1831]: I0129 16:21:17.030909 1831 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:21:17.040676 kubelet[1831]: I0129 16:21:17.040630 1831 policy_none.go:49] "None policy: Start"
Jan 29 16:21:17.040676 kubelet[1831]: I0129 16:21:17.040677 1831 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 16:21:17.040930 kubelet[1831]: I0129 16:21:17.040703 1831 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:21:17.062760 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 16:21:17.088295 kubelet[1831]: E0129 16:21:17.088251 1831 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"64.23.139.59\" not found"
Jan 29 16:21:17.090032 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 16:21:17.097908 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 16:21:17.108878 kubelet[1831]: I0129 16:21:17.108168 1831 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:21:17.108878 kubelet[1831]: I0129 16:21:17.108487 1831 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:21:17.108878 kubelet[1831]: I0129 16:21:17.108507 1831 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:21:17.114641 kubelet[1831]: I0129 16:21:17.113967 1831 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:21:17.114805 kubelet[1831]: E0129 16:21:17.114773 1831 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 16:21:17.116017 kubelet[1831]: E0129 16:21:17.115979 1831 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"64.23.139.59\" not found" Jan 29 16:21:17.121966 kubelet[1831]: I0129 16:21:17.121733 1831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:21:17.125432 kubelet[1831]: I0129 16:21:17.125367 1831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:21:17.125432 kubelet[1831]: I0129 16:21:17.125416 1831 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:21:17.125696 kubelet[1831]: I0129 16:21:17.125450 1831 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 16:21:17.125696 kubelet[1831]: I0129 16:21:17.125466 1831 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 16:21:17.125696 kubelet[1831]: E0129 16:21:17.125680 1831 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 16:21:17.211285 kubelet[1831]: I0129 16:21:17.210785 1831 kubelet_node_status.go:76] "Attempting to register node" node="64.23.139.59" Jan 29 16:21:17.216884 kubelet[1831]: I0129 16:21:17.216805 1831 kubelet_node_status.go:79] "Successfully registered node" node="64.23.139.59" Jan 29 16:21:17.217564 kubelet[1831]: E0129 16:21:17.217137 1831 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"64.23.139.59\": node \"64.23.139.59\" not found" Jan 29 16:21:17.238843 kubelet[1831]: I0129 16:21:17.238157 1831 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 16:21:17.239022 containerd[1482]: time="2025-01-29T16:21:17.238583008Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:21:17.239559 kubelet[1831]: I0129 16:21:17.239017 1831 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 16:21:17.289099 sudo[1683]: pam_unix(sudo:session): session closed for user root Jan 29 16:21:17.292903 sshd[1682]: Connection closed by 139.178.89.65 port 57454 Jan 29 16:21:17.293943 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:17.301066 systemd[1]: sshd@8-64.23.139.59:22-139.178.89.65:57454.service: Deactivated successfully. Jan 29 16:21:17.305205 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:21:17.305524 systemd[1]: session-9.scope: Consumed 794ms CPU time, 76.5M memory peak. Jan 29 16:21:17.307699 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. 
Jan 29 16:21:17.310057 systemd-logind[1461]: Removed session 9. Jan 29 16:21:17.879562 kubelet[1831]: I0129 16:21:17.879464 1831 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:21:17.880799 kubelet[1831]: W0129 16:21:17.879753 1831 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:21:17.880799 kubelet[1831]: W0129 16:21:17.879809 1831 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:21:17.880799 kubelet[1831]: W0129 16:21:17.879968 1831 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:21:17.951980 kubelet[1831]: I0129 16:21:17.951916 1831 apiserver.go:52] "Watching apiserver" Jan 29 16:21:17.952206 kubelet[1831]: E0129 16:21:17.951909 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:17.959754 kubelet[1831]: E0129 16:21:17.959570 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:17.971900 systemd[1]: Created slice kubepods-besteffort-pod510d14af_089e_4f77_98f7_dd807805cdfe.slice - libcontainer container 
kubepods-besteffort-pod510d14af_089e_4f77_98f7_dd807805cdfe.slice. Jan 29 16:21:17.991798 kubelet[1831]: I0129 16:21:17.991687 1831 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:21:17.999138 systemd[1]: Created slice kubepods-besteffort-pod541c5c63_2955_44c3_95b6_3767a1e2cc25.slice - libcontainer container kubepods-besteffort-pod541c5c63_2955_44c3_95b6_3767a1e2cc25.slice. Jan 29 16:21:18.000539 kubelet[1831]: I0129 16:21:18.000180 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/541c5c63-2955-44c3-95b6-3767a1e2cc25-xtables-lock\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.000539 kubelet[1831]: I0129 16:21:18.000240 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/541c5c63-2955-44c3-95b6-3767a1e2cc25-policysync\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.000783 kubelet[1831]: I0129 16:21:18.000752 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/541c5c63-2955-44c3-95b6-3767a1e2cc25-cni-log-dir\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.000965 kubelet[1831]: I0129 16:21:18.000938 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/541c5c63-2955-44c3-95b6-3767a1e2cc25-flexvol-driver-host\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.001161 
kubelet[1831]: I0129 16:21:18.001138 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf-varrun\") pod \"csi-node-driver-j225r\" (UID: \"ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf\") " pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:18.001282 kubelet[1831]: I0129 16:21:18.001263 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc2hf\" (UniqueName: \"kubernetes.io/projected/ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf-kube-api-access-cc2hf\") pod \"csi-node-driver-j225r\" (UID: \"ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf\") " pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:18.001397 kubelet[1831]: I0129 16:21:18.001379 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65jlq\" (UniqueName: \"kubernetes.io/projected/510d14af-089e-4f77-98f7-dd807805cdfe-kube-api-access-65jlq\") pod \"kube-proxy-qbsql\" (UID: \"510d14af-089e-4f77-98f7-dd807805cdfe\") " pod="kube-system/kube-proxy-qbsql" Jan 29 16:21:18.001937 kubelet[1831]: I0129 16:21:18.001899 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/541c5c63-2955-44c3-95b6-3767a1e2cc25-tigera-ca-bundle\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.002043 kubelet[1831]: I0129 16:21:18.001972 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/541c5c63-2955-44c3-95b6-3767a1e2cc25-cni-bin-dir\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.002043 kubelet[1831]: I0129 16:21:18.002001 1831 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/541c5c63-2955-44c3-95b6-3767a1e2cc25-cni-net-dir\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.002167 kubelet[1831]: I0129 16:21:18.002048 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf-kubelet-dir\") pod \"csi-node-driver-j225r\" (UID: \"ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf\") " pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:18.002167 kubelet[1831]: I0129 16:21:18.002074 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/510d14af-089e-4f77-98f7-dd807805cdfe-kube-proxy\") pod \"kube-proxy-qbsql\" (UID: \"510d14af-089e-4f77-98f7-dd807805cdfe\") " pod="kube-system/kube-proxy-qbsql" Jan 29 16:21:18.002167 kubelet[1831]: I0129 16:21:18.002112 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/541c5c63-2955-44c3-95b6-3767a1e2cc25-lib-modules\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.002167 kubelet[1831]: I0129 16:21:18.002136 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/541c5c63-2955-44c3-95b6-3767a1e2cc25-var-run-calico\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.002359 kubelet[1831]: I0129 16:21:18.002166 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/541c5c63-2955-44c3-95b6-3767a1e2cc25-var-lib-calico\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.002359 kubelet[1831]: I0129 16:21:18.002208 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf-socket-dir\") pod \"csi-node-driver-j225r\" (UID: \"ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf\") " pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:18.002359 kubelet[1831]: I0129 16:21:18.002232 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/541c5c63-2955-44c3-95b6-3767a1e2cc25-node-certs\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.002359 kubelet[1831]: I0129 16:21:18.002272 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqc5n\" (UniqueName: \"kubernetes.io/projected/541c5c63-2955-44c3-95b6-3767a1e2cc25-kube-api-access-cqc5n\") pod \"calico-node-978t8\" (UID: \"541c5c63-2955-44c3-95b6-3767a1e2cc25\") " pod="calico-system/calico-node-978t8" Jan 29 16:21:18.002359 kubelet[1831]: I0129 16:21:18.002305 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf-registration-dir\") pod \"csi-node-driver-j225r\" (UID: \"ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf\") " pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:18.002623 kubelet[1831]: I0129 16:21:18.002350 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/510d14af-089e-4f77-98f7-dd807805cdfe-xtables-lock\") pod \"kube-proxy-qbsql\" (UID: \"510d14af-089e-4f77-98f7-dd807805cdfe\") " pod="kube-system/kube-proxy-qbsql" Jan 29 16:21:18.002623 kubelet[1831]: I0129 16:21:18.002394 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/510d14af-089e-4f77-98f7-dd807805cdfe-lib-modules\") pod \"kube-proxy-qbsql\" (UID: \"510d14af-089e-4f77-98f7-dd807805cdfe\") " pod="kube-system/kube-proxy-qbsql" Jan 29 16:21:18.107104 kubelet[1831]: E0129 16:21:18.106974 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.107309 kubelet[1831]: W0129 16:21:18.107291 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.107508 kubelet[1831]: E0129 16:21:18.107443 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.108341 kubelet[1831]: E0129 16:21:18.107980 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.108341 kubelet[1831]: W0129 16:21:18.108000 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.111271 kubelet[1831]: E0129 16:21:18.111116 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:21:18.112034 kubelet[1831]: E0129 16:21:18.111911 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.112254 kubelet[1831]: W0129 16:21:18.112170 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.112434 kubelet[1831]: E0129 16:21:18.112358 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.115567 kubelet[1831]: E0129 16:21:18.115468 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.115567 kubelet[1831]: W0129 16:21:18.115491 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.116013 kubelet[1831]: E0129 16:21:18.115786 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:21:18.117023 kubelet[1831]: E0129 16:21:18.116992 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.117240 kubelet[1831]: W0129 16:21:18.117134 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.117240 kubelet[1831]: E0129 16:21:18.117216 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.119145 kubelet[1831]: E0129 16:21:18.119073 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.119145 kubelet[1831]: W0129 16:21:18.119110 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.119810 kubelet[1831]: E0129 16:21:18.119623 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.119810 kubelet[1831]: W0129 16:21:18.119649 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.120211 kubelet[1831]: E0129 16:21:18.120146 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:21:18.120211 kubelet[1831]: E0129 16:21:18.120188 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.120558 kubelet[1831]: E0129 16:21:18.120479 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.120558 kubelet[1831]: W0129 16:21:18.120504 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.120990 kubelet[1831]: E0129 16:21:18.120811 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.121299 kubelet[1831]: E0129 16:21:18.121180 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.121299 kubelet[1831]: W0129 16:21:18.121193 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.121299 kubelet[1831]: E0129 16:21:18.121242 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:21:18.121756 kubelet[1831]: E0129 16:21:18.121666 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.121756 kubelet[1831]: W0129 16:21:18.121682 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.121756 kubelet[1831]: E0129 16:21:18.121748 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.122470 kubelet[1831]: E0129 16:21:18.122311 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.122470 kubelet[1831]: W0129 16:21:18.122339 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.122470 kubelet[1831]: E0129 16:21:18.122380 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:21:18.122945 kubelet[1831]: E0129 16:21:18.122694 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.122945 kubelet[1831]: W0129 16:21:18.122859 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.122945 kubelet[1831]: E0129 16:21:18.122902 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.123571 kubelet[1831]: E0129 16:21:18.123432 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.123571 kubelet[1831]: W0129 16:21:18.123451 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.123571 kubelet[1831]: E0129 16:21:18.123487 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:21:18.123986 kubelet[1831]: E0129 16:21:18.123910 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.123986 kubelet[1831]: W0129 16:21:18.123924 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.124248 kubelet[1831]: E0129 16:21:18.124165 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.124335 kubelet[1831]: E0129 16:21:18.124256 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.124335 kubelet[1831]: W0129 16:21:18.124274 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.124678 kubelet[1831]: E0129 16:21:18.124362 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:21:18.124792 kubelet[1831]: E0129 16:21:18.124762 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.124792 kubelet[1831]: W0129 16:21:18.124776 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.124949 kubelet[1831]: E0129 16:21:18.124893 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.125379 kubelet[1831]: E0129 16:21:18.125356 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.125379 kubelet[1831]: W0129 16:21:18.125372 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.125702 kubelet[1831]: E0129 16:21:18.125619 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.125702 kubelet[1831]: E0129 16:21:18.125628 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:21:18.125702 kubelet[1831]: W0129 16:21:18.125640 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.125979 kubelet[1831]: E0129 16:21:18.125887 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.126115 kubelet[1831]: E0129 16:21:18.126100 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.126115 kubelet[1831]: W0129 16:21:18.126114 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.126401 kubelet[1831]: E0129 16:21:18.126310 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:18.126401 kubelet[1831]: E0129 16:21:18.126391 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:18.126608 kubelet[1831]: W0129 16:21:18.126422 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:18.126608 kubelet[1831]: E0129 16:21:18.126514 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 16:21:18.126785 kubelet[1831]: E0129 16:21:18.126763 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:18.126785 kubelet[1831]: W0129 16:21:18.126780 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:18.127017 kubelet[1831]: E0129 16:21:18.126959 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:18.127122 kubelet[1831]: E0129 16:21:18.127102 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:18.127172 kubelet[1831]: W0129 16:21:18.127123 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:18.127228 kubelet[1831]: E0129 16:21:18.127213 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:18.128324 kubelet[1831]: E0129 16:21:18.128294 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:18.128324 kubelet[1831]: W0129 16:21:18.128317 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:18.128808 kubelet[1831]: E0129 16:21:18.128764 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:18.128946 kubelet[1831]: E0129 16:21:18.128922 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:18.128946 kubelet[1831]: W0129 16:21:18.128933 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:18.129041 kubelet[1831]: E0129 16:21:18.128947 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:18.465738 kubelet[1831]: E0129 16:21:18.464806 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:18.465738 kubelet[1831]: W0129 16:21:18.464932 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:18.465738 kubelet[1831]: E0129 16:21:18.464983 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:18.470630 kubelet[1831]: E0129 16:21:18.470588 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:18.470872 kubelet[1831]: W0129 16:21:18.470835 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:18.471063 kubelet[1831]: E0129 16:21:18.471043 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:18.475235 kubelet[1831]: E0129 16:21:18.475183 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:18.475235 kubelet[1831]: W0129 16:21:18.475236 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:18.475447 kubelet[1831]: E0129 16:21:18.475270 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:18.597452 kubelet[1831]: E0129 16:21:18.597404 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 16:21:18.599395 containerd[1482]: time="2025-01-29T16:21:18.599341867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbsql,Uid:510d14af-089e-4f77-98f7-dd807805cdfe,Namespace:kube-system,Attempt:0,}"
Jan 29 16:21:18.605393 kubelet[1831]: E0129 16:21:18.605335 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 16:21:18.606008 containerd[1482]: time="2025-01-29T16:21:18.605962239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-978t8,Uid:541c5c63-2955-44c3-95b6-3767a1e2cc25,Namespace:calico-system,Attempt:0,}"
Jan 29 16:21:18.611255 systemd-resolved[1337]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
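The FlexVolume failures above all stem from one condition: the driver executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so each driver call fails with empty output, and decoding "" as JSON then fails with "unexpected end of JSON input". A minimal sketch of that chain (illustrative Python only; the kubelet itself is Go, and the function below is hypothetical, not the kubelet's API):

```python
import json
import os

def probe_flexvolume(driver: str) -> dict:
    # Hypothetical stand-in for kubelet's FlexVolume probe.
    # driver-call.go:149 -- the executable is missing, so the [init]
    # driver call produces no output at all.
    if not os.path.exists(driver):
        output = ""  # 'executable file not found in $PATH, output: ""'
    else:
        output = ""  # a real probe would run [driver, "init"] here
    # driver-call.go:262 -- decoding the empty output as JSON raises,
    # the analogue of Go's "unexpected end of JSON input".
    return json.loads(output)

try:
    probe_flexvolume(
        "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
    )
except json.JSONDecodeError as err:
    print(f"Failed to unmarshal output for command: init, error: {err}")
```

Because the probe runs on every plugin-directory scan, the same three-line failure repeats for as long as the executable is absent, which is why the triple recurs throughout this log.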
Jan 29 16:21:18.952443 kubelet[1831]: E0129 16:21:18.952370 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:21:19.212098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288102444.mount: Deactivated successfully.
Jan 29 16:21:19.227529 containerd[1482]: time="2025-01-29T16:21:19.226672486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:21:19.232116 containerd[1482]: time="2025-01-29T16:21:19.232055051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 29 16:21:19.238110 containerd[1482]: time="2025-01-29T16:21:19.238023512Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:21:19.240637 containerd[1482]: time="2025-01-29T16:21:19.240543984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:21:19.241515 containerd[1482]: time="2025-01-29T16:21:19.241370050Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 16:21:19.249192 containerd[1482]: time="2025-01-29T16:21:19.249110279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:21:19.251877 containerd[1482]: time="2025-01-29T16:21:19.250989094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.455534ms"
Jan 29 16:21:19.252984 containerd[1482]: time="2025-01-29T16:21:19.252926754Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 646.829259ms"
Jan 29 16:21:19.490098 containerd[1482]: time="2025-01-29T16:21:19.488489110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:21:19.490098 containerd[1482]: time="2025-01-29T16:21:19.489862038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:21:19.490359 containerd[1482]: time="2025-01-29T16:21:19.489883810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:21:19.490359 containerd[1482]: time="2025-01-29T16:21:19.490046714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:21:19.507129 containerd[1482]: time="2025-01-29T16:21:19.505175834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:21:19.507129 containerd[1482]: time="2025-01-29T16:21:19.505457024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:21:19.507129 containerd[1482]: time="2025-01-29T16:21:19.505486868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:21:19.507129 containerd[1482]: time="2025-01-29T16:21:19.506179270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:21:19.622685 systemd[1]: Started cri-containerd-a82ffd1ccb102b0eefe9bfba8599bb97113ae4de78e5d9fef28c2b3d92770358.scope - libcontainer container a82ffd1ccb102b0eefe9bfba8599bb97113ae4de78e5d9fef28c2b3d92770358.
Jan 29 16:21:19.630411 systemd[1]: Started cri-containerd-fc1802fb106fdb51fbe5a4b9d9293b34b743d8e83c29fed2181adef39e517510.scope - libcontainer container fc1802fb106fdb51fbe5a4b9d9293b34b743d8e83c29fed2181adef39e517510.
Jan 29 16:21:19.702612 containerd[1482]: time="2025-01-29T16:21:19.702544233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbsql,Uid:510d14af-089e-4f77-98f7-dd807805cdfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"a82ffd1ccb102b0eefe9bfba8599bb97113ae4de78e5d9fef28c2b3d92770358\""
Jan 29 16:21:19.706109 kubelet[1831]: E0129 16:21:19.705533 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 16:21:19.709017 containerd[1482]: time="2025-01-29T16:21:19.708643295Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 29 16:21:19.715192 containerd[1482]: time="2025-01-29T16:21:19.714813797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-978t8,Uid:541c5c63-2955-44c3-95b6-3767a1e2cc25,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc1802fb106fdb51fbe5a4b9d9293b34b743d8e83c29fed2181adef39e517510\""
Jan 29 16:21:19.716392 kubelet[1831]: E0129 16:21:19.716035 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 16:21:19.953381 kubelet[1831]: E0129 16:21:19.953269 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:21:20.126388 kubelet[1831]: E0129 16:21:20.126273 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf"
Jan 29 16:21:20.955071 kubelet[1831]: E0129 16:21:20.954989 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:21:21.176373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount142344147.mount: Deactivated successfully.
Jan 29 16:21:21.674065 systemd-resolved[1337]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Jan 29 16:21:21.953098 containerd[1482]: time="2025-01-29T16:21:21.952005181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:21:21.954739 containerd[1482]: time="2025-01-29T16:21:21.954544325Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466"
Jan 29 16:21:21.955565 kubelet[1831]: E0129 16:21:21.955498 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:21:21.956462 containerd[1482]: time="2025-01-29T16:21:21.956210778Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:21:21.962136 containerd[1482]: time="2025-01-29T16:21:21.962000482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:21:21.963128 containerd[1482]: time="2025-01-29T16:21:21.962852898Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.254153505s"
Jan 29 16:21:21.963128 containerd[1482]: time="2025-01-29T16:21:21.962905902Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\""
Jan 29 16:21:21.964712 containerd[1482]: time="2025-01-29T16:21:21.964646427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 16:21:21.967387 containerd[1482]: time="2025-01-29T16:21:21.967190008Z" level=info msg="CreateContainer within sandbox \"a82ffd1ccb102b0eefe9bfba8599bb97113ae4de78e5d9fef28c2b3d92770358\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 16:21:21.996249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069850191.mount: Deactivated successfully.
Jan 29 16:21:22.124568 containerd[1482]: time="2025-01-29T16:21:22.124131365Z" level=info msg="CreateContainer within sandbox \"a82ffd1ccb102b0eefe9bfba8599bb97113ae4de78e5d9fef28c2b3d92770358\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f43366870747ed7585ac05996d5eb4394735a728ad8eab7554e928c77a86b77\""
Jan 29 16:21:22.125868 containerd[1482]: time="2025-01-29T16:21:22.125484554Z" level=info msg="StartContainer for \"4f43366870747ed7585ac05996d5eb4394735a728ad8eab7554e928c77a86b77\""
Jan 29 16:21:22.126049 kubelet[1831]: E0129 16:21:22.125772 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf"
Jan 29 16:21:22.198186 systemd[1]: Started cri-containerd-4f43366870747ed7585ac05996d5eb4394735a728ad8eab7554e928c77a86b77.scope - libcontainer container 4f43366870747ed7585ac05996d5eb4394735a728ad8eab7554e928c77a86b77.
Jan 29 16:21:22.268712 containerd[1482]: time="2025-01-29T16:21:22.268638601Z" level=info msg="StartContainer for \"4f43366870747ed7585ac05996d5eb4394735a728ad8eab7554e928c77a86b77\" returns successfully"
Jan 29 16:21:22.955928 kubelet[1831]: E0129 16:21:22.955846 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:21:23.158504 kubelet[1831]: E0129 16:21:23.158459 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 16:21:23.173724 kubelet[1831]: I0129 16:21:23.173630 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbsql" podStartSLOduration=3.917261066 podStartE2EDuration="6.173606001s" podCreationTimestamp="2025-01-29 16:21:17 +0000 UTC" firstStartedPulling="2025-01-29 16:21:19.707963716 +0000 UTC m=+3.456921163" lastFinishedPulling="2025-01-29 16:21:21.964308636 +0000 UTC m=+5.713266098" observedRunningTime="2025-01-29 16:21:23.173415005 +0000 UTC m=+6.922372464" watchObservedRunningTime="2025-01-29 16:21:23.173606001 +0000 UTC m=+6.922563459"
Jan 29 16:21:23.234776 kubelet[1831]: E0129 16:21:23.234119 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.234776 kubelet[1831]: W0129 16:21:23.234178 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.234776 kubelet[1831]: E0129 16:21:23.234214 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.234776 kubelet[1831]: E0129 16:21:23.234741 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.234776 kubelet[1831]: W0129 16:21:23.234766 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.236455 kubelet[1831]: E0129 16:21:23.234792 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.236455 kubelet[1831]: E0129 16:21:23.235711 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.236455 kubelet[1831]: W0129 16:21:23.235734 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.236455 kubelet[1831]: E0129 16:21:23.235778 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.236455 kubelet[1831]: E0129 16:21:23.236342 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.236455 kubelet[1831]: W0129 16:21:23.236422 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.236455 kubelet[1831]: E0129 16:21:23.236460 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.237690 kubelet[1831]: E0129 16:21:23.236937 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.237690 kubelet[1831]: W0129 16:21:23.236954 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.237690 kubelet[1831]: E0129 16:21:23.236973 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.237690 kubelet[1831]: E0129 16:21:23.237365 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.237690 kubelet[1831]: W0129 16:21:23.237385 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.237690 kubelet[1831]: E0129 16:21:23.237403 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.238466 kubelet[1831]: E0129 16:21:23.237866 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.238466 kubelet[1831]: W0129 16:21:23.237884 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.238466 kubelet[1831]: E0129 16:21:23.237903 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.238466 kubelet[1831]: E0129 16:21:23.238202 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.238466 kubelet[1831]: W0129 16:21:23.238249 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.238466 kubelet[1831]: E0129 16:21:23.238268 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.240287 kubelet[1831]: E0129 16:21:23.238626 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.240287 kubelet[1831]: W0129 16:21:23.238642 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.240287 kubelet[1831]: E0129 16:21:23.238658 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.240287 kubelet[1831]: E0129 16:21:23.239008 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.240287 kubelet[1831]: W0129 16:21:23.239024 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.240287 kubelet[1831]: E0129 16:21:23.239042 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.240287 kubelet[1831]: E0129 16:21:23.239415 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.240287 kubelet[1831]: W0129 16:21:23.239431 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.240287 kubelet[1831]: E0129 16:21:23.239447 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.240287 kubelet[1831]: E0129 16:21:23.239742 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.240832 kubelet[1831]: W0129 16:21:23.239756 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.240832 kubelet[1831]: E0129 16:21:23.239770 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.240832 kubelet[1831]: E0129 16:21:23.240070 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.240832 kubelet[1831]: W0129 16:21:23.240084 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.240832 kubelet[1831]: E0129 16:21:23.240098 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.240832 kubelet[1831]: E0129 16:21:23.240380 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.240832 kubelet[1831]: W0129 16:21:23.240395 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.240832 kubelet[1831]: E0129 16:21:23.240411 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.240832 kubelet[1831]: E0129 16:21:23.240720 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.240832 kubelet[1831]: W0129 16:21:23.240735 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.242390 kubelet[1831]: E0129 16:21:23.240752 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.242390 kubelet[1831]: E0129 16:21:23.241120 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.242390 kubelet[1831]: W0129 16:21:23.241137 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.242390 kubelet[1831]: E0129 16:21:23.241154 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.242390 kubelet[1831]: E0129 16:21:23.241446 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.242390 kubelet[1831]: W0129 16:21:23.241460 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.242390 kubelet[1831]: E0129 16:21:23.241476 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.242390 kubelet[1831]: E0129 16:21:23.241732 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.242390 kubelet[1831]: W0129 16:21:23.241745 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.242390 kubelet[1831]: E0129 16:21:23.241760 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.243682 kubelet[1831]: E0129 16:21:23.242041 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.243682 kubelet[1831]: W0129 16:21:23.242056 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.243682 kubelet[1831]: E0129 16:21:23.242072 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.243682 kubelet[1831]: E0129 16:21:23.242362 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.243682 kubelet[1831]: W0129 16:21:23.242378 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.243682 kubelet[1831]: E0129 16:21:23.242394 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.243682 kubelet[1831]: E0129 16:21:23.242784 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.243682 kubelet[1831]: W0129 16:21:23.242798 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.243682 kubelet[1831]: E0129 16:21:23.242998 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.243682 kubelet[1831]: E0129 16:21:23.243345 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.245253 kubelet[1831]: W0129 16:21:23.243365 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.245253 kubelet[1831]: E0129 16:21:23.243417 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.245253 kubelet[1831]: E0129 16:21:23.243765 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.245253 kubelet[1831]: W0129 16:21:23.243780 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.245253 kubelet[1831]: E0129 16:21:23.243805 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.245253 kubelet[1831]: E0129 16:21:23.244112 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.245253 kubelet[1831]: W0129 16:21:23.244132 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.245253 kubelet[1831]: E0129 16:21:23.244206 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.245253 kubelet[1831]: E0129 16:21:23.244573 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.245253 kubelet[1831]: W0129 16:21:23.244593 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.247924 kubelet[1831]: E0129 16:21:23.244624 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.247924 kubelet[1831]: E0129 16:21:23.244950 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.247924 kubelet[1831]: W0129 16:21:23.244965 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.247924 kubelet[1831]: E0129 16:21:23.244987 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.247924 kubelet[1831]: E0129 16:21:23.245329 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.247924 kubelet[1831]: W0129 16:21:23.245343 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.247924 kubelet[1831]: E0129 16:21:23.245368 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.247924 kubelet[1831]: E0129 16:21:23.245705 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.247924 kubelet[1831]: W0129 16:21:23.245719 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.247924 kubelet[1831]: E0129 16:21:23.245738 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.248478 kubelet[1831]: E0129 16:21:23.246010 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.248478 kubelet[1831]: W0129 16:21:23.246023 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.248478 kubelet[1831]: E0129 16:21:23.246044 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.249126 kubelet[1831]: E0129 16:21:23.248768 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.249126 kubelet[1831]: W0129 16:21:23.248796 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.249126 kubelet[1831]: E0129 16:21:23.248843 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:21:23.250305 kubelet[1831]: E0129 16:21:23.250243 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:21:23.250305 kubelet[1831]: W0129 16:21:23.250292 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:21:23.250720 kubelet[1831]: E0129 16:21:23.250329 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 29 16:21:23.251122 kubelet[1831]: E0129 16:21:23.251025 1831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:21:23.251122 kubelet[1831]: W0129 16:21:23.251050 1831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:21:23.251122 kubelet[1831]: E0129 16:21:23.251074 1831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:21:23.377010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680529776.mount: Deactivated successfully. Jan 29 16:21:23.542927 containerd[1482]: time="2025-01-29T16:21:23.542769614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:23.546906 containerd[1482]: time="2025-01-29T16:21:23.546767696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 16:21:23.548887 containerd[1482]: time="2025-01-29T16:21:23.548764315Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:23.553233 containerd[1482]: time="2025-01-29T16:21:23.553115541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:23.555005 containerd[1482]: time="2025-01-29T16:21:23.554714789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with 
image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.590018596s" Jan 29 16:21:23.555005 containerd[1482]: time="2025-01-29T16:21:23.554783264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 16:21:23.558503 containerd[1482]: time="2025-01-29T16:21:23.558375951Z" level=info msg="CreateContainer within sandbox \"fc1802fb106fdb51fbe5a4b9d9293b34b743d8e83c29fed2181adef39e517510\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 16:21:23.596358 containerd[1482]: time="2025-01-29T16:21:23.596286395Z" level=info msg="CreateContainer within sandbox \"fc1802fb106fdb51fbe5a4b9d9293b34b743d8e83c29fed2181adef39e517510\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"18fbe006d2b71483cb44ea3009113783c2f388b461b6c188d19370709b7220ff\"" Jan 29 16:21:23.598284 containerd[1482]: time="2025-01-29T16:21:23.597230168Z" level=info msg="StartContainer for \"18fbe006d2b71483cb44ea3009113783c2f388b461b6c188d19370709b7220ff\"" Jan 29 16:21:23.651250 systemd[1]: Started cri-containerd-18fbe006d2b71483cb44ea3009113783c2f388b461b6c188d19370709b7220ff.scope - libcontainer container 18fbe006d2b71483cb44ea3009113783c2f388b461b6c188d19370709b7220ff. Jan 29 16:21:23.710071 containerd[1482]: time="2025-01-29T16:21:23.709917374Z" level=info msg="StartContainer for \"18fbe006d2b71483cb44ea3009113783c2f388b461b6c188d19370709b7220ff\" returns successfully" Jan 29 16:21:23.734724 systemd[1]: cri-containerd-18fbe006d2b71483cb44ea3009113783c2f388b461b6c188d19370709b7220ff.scope: Deactivated successfully. 
Jan 29 16:21:23.896850 containerd[1482]: time="2025-01-29T16:21:23.896392819Z" level=info msg="shim disconnected" id=18fbe006d2b71483cb44ea3009113783c2f388b461b6c188d19370709b7220ff namespace=k8s.io Jan 29 16:21:23.896850 containerd[1482]: time="2025-01-29T16:21:23.896474395Z" level=warning msg="cleaning up after shim disconnected" id=18fbe006d2b71483cb44ea3009113783c2f388b461b6c188d19370709b7220ff namespace=k8s.io Jan 29 16:21:23.896850 containerd[1482]: time="2025-01-29T16:21:23.896488922Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:21:23.920251 containerd[1482]: time="2025-01-29T16:21:23.919874533Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:21:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:21:23.957015 kubelet[1831]: E0129 16:21:23.956950 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:24.125966 kubelet[1831]: E0129 16:21:24.125870 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:24.164003 kubelet[1831]: E0129 16:21:24.163641 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 16:21:24.165469 kubelet[1831]: E0129 16:21:24.164688 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 16:21:24.166400 containerd[1482]: 
time="2025-01-29T16:21:24.166361249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 16:21:24.319244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18fbe006d2b71483cb44ea3009113783c2f388b461b6c188d19370709b7220ff-rootfs.mount: Deactivated successfully. Jan 29 16:21:24.746522 systemd-resolved[1337]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 29 16:21:24.957807 kubelet[1831]: E0129 16:21:24.957731 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:25.957984 kubelet[1831]: E0129 16:21:25.957904 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:26.126881 kubelet[1831]: E0129 16:21:26.126651 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:26.958519 kubelet[1831]: E0129 16:21:26.958460 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:27.961349 kubelet[1831]: E0129 16:21:27.961292 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:28.126849 kubelet[1831]: E0129 16:21:28.126729 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:28.962230 kubelet[1831]: E0129 16:21:28.962174 1831 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:29.162483 containerd[1482]: time="2025-01-29T16:21:29.162316409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:29.165459 containerd[1482]: time="2025-01-29T16:21:29.165133409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 16:21:29.167592 containerd[1482]: time="2025-01-29T16:21:29.167479368Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:29.172883 containerd[1482]: time="2025-01-29T16:21:29.172742901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:29.176888 containerd[1482]: time="2025-01-29T16:21:29.176702532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.0102767s" Jan 29 16:21:29.176888 containerd[1482]: time="2025-01-29T16:21:29.176766964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 16:21:29.180384 containerd[1482]: time="2025-01-29T16:21:29.180302916Z" level=info msg="CreateContainer within sandbox \"fc1802fb106fdb51fbe5a4b9d9293b34b743d8e83c29fed2181adef39e517510\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 16:21:29.215791 
containerd[1482]: time="2025-01-29T16:21:29.215587017Z" level=info msg="CreateContainer within sandbox \"fc1802fb106fdb51fbe5a4b9d9293b34b743d8e83c29fed2181adef39e517510\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c\"" Jan 29 16:21:29.216762 containerd[1482]: time="2025-01-29T16:21:29.216705335Z" level=info msg="StartContainer for \"5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c\"" Jan 29 16:21:29.279389 systemd[1]: Started cri-containerd-5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c.scope - libcontainer container 5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c. Jan 29 16:21:29.339164 containerd[1482]: time="2025-01-29T16:21:29.339080763Z" level=info msg="StartContainer for \"5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c\" returns successfully" Jan 29 16:21:29.966215 kubelet[1831]: E0129 16:21:29.966121 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:30.126170 kubelet[1831]: E0129 16:21:30.126081 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:30.187899 kubelet[1831]: E0129 16:21:30.185941 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 16:21:30.496845 containerd[1482]: time="2025-01-29T16:21:30.496752793Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found 
in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:21:30.501499 systemd[1]: cri-containerd-5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c.scope: Deactivated successfully. Jan 29 16:21:30.502327 systemd[1]: cri-containerd-5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c.scope: Consumed 998ms CPU time, 174.4M memory peak, 151M written to disk. Jan 29 16:21:30.543413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c-rootfs.mount: Deactivated successfully. Jan 29 16:21:30.564507 kubelet[1831]: I0129 16:21:30.564466 1831 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 16:21:30.787462 containerd[1482]: time="2025-01-29T16:21:30.787286419Z" level=info msg="shim disconnected" id=5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c namespace=k8s.io Jan 29 16:21:30.787462 containerd[1482]: time="2025-01-29T16:21:30.787393939Z" level=warning msg="cleaning up after shim disconnected" id=5f466ef2666c027eea617a2fb224740e6f20d555d540c7104feb2a554a6db62c namespace=k8s.io Jan 29 16:21:30.787462 containerd[1482]: time="2025-01-29T16:21:30.787409525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:21:30.811316 containerd[1482]: time="2025-01-29T16:21:30.811243342Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:21:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:21:30.967386 kubelet[1831]: E0129 16:21:30.967283 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:31.192216 kubelet[1831]: E0129 16:21:31.190846 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 16:21:31.193619 containerd[1482]: time="2025-01-29T16:21:31.193562180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 16:21:31.968091 kubelet[1831]: E0129 16:21:31.968033 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:32.140963 systemd[1]: Created slice kubepods-besteffort-podad7fc0a9_ad4e_43cb_a88c_afc4b29ddcaf.slice - libcontainer container kubepods-besteffort-podad7fc0a9_ad4e_43cb_a88c_afc4b29ddcaf.slice. Jan 29 16:21:32.148459 containerd[1482]: time="2025-01-29T16:21:32.148391612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:0,}" Jan 29 16:21:32.319861 containerd[1482]: time="2025-01-29T16:21:32.317719675Z" level=error msg="Failed to destroy network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:32.320518 containerd[1482]: time="2025-01-29T16:21:32.320346862Z" level=error msg="encountered an error cleaning up failed sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:32.320640 containerd[1482]: time="2025-01-29T16:21:32.320579970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:32.322887 kubelet[1831]: E0129 16:21:32.321062 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:32.322887 kubelet[1831]: E0129 16:21:32.321181 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:32.322887 kubelet[1831]: E0129 16:21:32.321217 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:32.323219 kubelet[1831]: E0129 16:21:32.321290 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:32.323768 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c-shm.mount: Deactivated successfully. Jan 29 16:21:32.969470 kubelet[1831]: E0129 16:21:32.969060 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:33.214272 kubelet[1831]: I0129 16:21:33.212745 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c" Jan 29 16:21:33.215358 containerd[1482]: time="2025-01-29T16:21:33.214842080Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\"" Jan 29 16:21:33.215358 containerd[1482]: time="2025-01-29T16:21:33.215154787Z" level=info msg="Ensure that sandbox eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c in task-service has been cleanup successfully" Jan 29 16:21:33.220628 containerd[1482]: time="2025-01-29T16:21:33.218503209Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully" Jan 29 16:21:33.220628 containerd[1482]: time="2025-01-29T16:21:33.218574700Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully" Jan 29 16:21:33.222175 containerd[1482]: time="2025-01-29T16:21:33.221176523Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:1,}" Jan 29 16:21:33.223560 systemd[1]: run-netns-cni\x2d1cf15f65\x2d4c06\x2d1f7c\x2dcc5b\x2da2221fd20e41.mount: Deactivated successfully. Jan 29 16:21:33.390898 containerd[1482]: time="2025-01-29T16:21:33.390790181Z" level=error msg="Failed to destroy network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:33.393065 containerd[1482]: time="2025-01-29T16:21:33.391566532Z" level=error msg="encountered an error cleaning up failed sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:33.395004 containerd[1482]: time="2025-01-29T16:21:33.393944866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:33.395198 kubelet[1831]: E0129 16:21:33.394367 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:33.395198 kubelet[1831]: E0129 16:21:33.394447 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:33.395198 kubelet[1831]: E0129 16:21:33.394522 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:33.395431 kubelet[1831]: E0129 16:21:33.394587 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:33.396380 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17-shm.mount: Deactivated successfully. Jan 29 16:21:33.969594 kubelet[1831]: E0129 16:21:33.969492 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:34.217863 kubelet[1831]: I0129 16:21:34.217521 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17" Jan 29 16:21:34.221995 containerd[1482]: time="2025-01-29T16:21:34.221172872Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\"" Jan 29 16:21:34.221995 containerd[1482]: time="2025-01-29T16:21:34.221671852Z" level=info msg="Ensure that sandbox 59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17 in task-service has been cleanup successfully" Jan 29 16:21:34.223669 containerd[1482]: time="2025-01-29T16:21:34.223623592Z" level=info msg="TearDown network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" successfully" Jan 29 16:21:34.224949 containerd[1482]: time="2025-01-29T16:21:34.224884452Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" returns successfully" Jan 29 16:21:34.229548 systemd[1]: run-netns-cni\x2d740bc98d\x2d5a88\x2da525\x2d37e1\x2d6e6763c3c176.mount: Deactivated successfully. 
Jan 29 16:21:34.231549 containerd[1482]: time="2025-01-29T16:21:34.230511496Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\"" Jan 29 16:21:34.231549 containerd[1482]: time="2025-01-29T16:21:34.230809734Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully" Jan 29 16:21:34.231549 containerd[1482]: time="2025-01-29T16:21:34.230993994Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully" Jan 29 16:21:34.235720 containerd[1482]: time="2025-01-29T16:21:34.235647689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:2,}" Jan 29 16:21:34.440479 containerd[1482]: time="2025-01-29T16:21:34.440399294Z" level=error msg="Failed to destroy network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:34.441140 containerd[1482]: time="2025-01-29T16:21:34.441092390Z" level=error msg="encountered an error cleaning up failed sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:34.441281 containerd[1482]: time="2025-01-29T16:21:34.441175398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:34.443071 kubelet[1831]: E0129 16:21:34.442043 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:34.443071 kubelet[1831]: E0129 16:21:34.442118 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:34.443071 kubelet[1831]: E0129 16:21:34.442173 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:34.443289 kubelet[1831]: E0129 16:21:34.442235 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:34.446799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb-shm.mount: Deactivated successfully. Jan 29 16:21:34.711089 systemd[1]: Created slice kubepods-besteffort-podfab8c22d_5eb2_44c6_a21e_243ba8bafb34.slice - libcontainer container kubepods-besteffort-podfab8c22d_5eb2_44c6_a21e_243ba8bafb34.slice. Jan 29 16:21:34.758708 kubelet[1831]: I0129 16:21:34.758228 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xw9x\" (UniqueName: \"kubernetes.io/projected/fab8c22d-5eb2-44c6-a21e-243ba8bafb34-kube-api-access-4xw9x\") pod \"nginx-deployment-7fcdb87857-lt79c\" (UID: \"fab8c22d-5eb2-44c6-a21e-243ba8bafb34\") " pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:34.969786 kubelet[1831]: E0129 16:21:34.969643 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:35.022628 containerd[1482]: time="2025-01-29T16:21:35.022124334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:0,}" Jan 29 16:21:35.227385 kubelet[1831]: I0129 16:21:35.226184 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb" Jan 29 16:21:35.229171 containerd[1482]: 
time="2025-01-29T16:21:35.229123175Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\"" Jan 29 16:21:35.234840 containerd[1482]: time="2025-01-29T16:21:35.233721703Z" level=info msg="Ensure that sandbox 4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb in task-service has been cleanup successfully" Jan 29 16:21:35.241679 systemd[1]: run-netns-cni\x2d0dd2c475\x2d66db\x2d9457\x2daf26\x2dc48c3a5cde81.mount: Deactivated successfully. Jan 29 16:21:35.243733 containerd[1482]: time="2025-01-29T16:21:35.243261243Z" level=info msg="TearDown network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" successfully" Jan 29 16:21:35.244421 containerd[1482]: time="2025-01-29T16:21:35.244376014Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" returns successfully" Jan 29 16:21:35.248119 containerd[1482]: time="2025-01-29T16:21:35.247449026Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\"" Jan 29 16:21:35.248119 containerd[1482]: time="2025-01-29T16:21:35.247638182Z" level=info msg="TearDown network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" successfully" Jan 29 16:21:35.248119 containerd[1482]: time="2025-01-29T16:21:35.247659260Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" returns successfully" Jan 29 16:21:35.249750 containerd[1482]: time="2025-01-29T16:21:35.249468518Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\"" Jan 29 16:21:35.249750 containerd[1482]: time="2025-01-29T16:21:35.249629752Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully" Jan 29 16:21:35.249750 containerd[1482]: time="2025-01-29T16:21:35.249648795Z" level=info 
msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully" Jan 29 16:21:35.251657 containerd[1482]: time="2025-01-29T16:21:35.251006209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:3,}" Jan 29 16:21:35.261606 containerd[1482]: time="2025-01-29T16:21:35.261513929Z" level=error msg="Failed to destroy network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:35.264686 containerd[1482]: time="2025-01-29T16:21:35.264601741Z" level=error msg="encountered an error cleaning up failed sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:35.266927 containerd[1482]: time="2025-01-29T16:21:35.264737860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:35.266678 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d-shm.mount: Deactivated successfully. 
Jan 29 16:21:35.268097 kubelet[1831]: E0129 16:21:35.268025 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:35.270895 kubelet[1831]: E0129 16:21:35.270744 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:35.271209 kubelet[1831]: E0129 16:21:35.271064 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:35.271851 kubelet[1831]: E0129 16:21:35.271599 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-lt79c" podUID="fab8c22d-5eb2-44c6-a21e-243ba8bafb34" Jan 29 16:21:35.458247 containerd[1482]: time="2025-01-29T16:21:35.457740990Z" level=error msg="Failed to destroy network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:35.458439 containerd[1482]: time="2025-01-29T16:21:35.458299863Z" level=error msg="encountered an error cleaning up failed sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:35.458515 containerd[1482]: time="2025-01-29T16:21:35.458457313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:35.459983 kubelet[1831]: E0129 16:21:35.459927 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:35.459983 kubelet[1831]: E0129 16:21:35.460005 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:35.460366 kubelet[1831]: E0129 16:21:35.460036 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:35.460366 kubelet[1831]: E0129 16:21:35.460097 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:35.971152 kubelet[1831]: E0129 16:21:35.970745 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 29 16:21:36.227679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619-shm.mount: Deactivated successfully. Jan 29 16:21:36.236639 kubelet[1831]: I0129 16:21:36.236594 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619" Jan 29 16:21:36.237776 containerd[1482]: time="2025-01-29T16:21:36.237527813Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\"" Jan 29 16:21:36.240456 containerd[1482]: time="2025-01-29T16:21:36.237869127Z" level=info msg="Ensure that sandbox c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619 in task-service has been cleanup successfully" Jan 29 16:21:36.242306 systemd[1]: run-netns-cni\x2dc4b8c105\x2d844c\x2d64a8\x2d8c97\x2dbc29cbb6fea2.mount: Deactivated successfully. Jan 29 16:21:36.244431 containerd[1482]: time="2025-01-29T16:21:36.243447524Z" level=info msg="TearDown network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" successfully" Jan 29 16:21:36.244431 containerd[1482]: time="2025-01-29T16:21:36.243483143Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" returns successfully" Jan 29 16:21:36.247711 kubelet[1831]: I0129 16:21:36.246684 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d" Jan 29 16:21:36.247932 containerd[1482]: time="2025-01-29T16:21:36.247185499Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\"" Jan 29 16:21:36.247932 containerd[1482]: time="2025-01-29T16:21:36.247331145Z" level=info msg="TearDown network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" successfully" 
Jan 29 16:21:36.247932 containerd[1482]: time="2025-01-29T16:21:36.247350107Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" returns successfully" Jan 29 16:21:36.249666 containerd[1482]: time="2025-01-29T16:21:36.249612912Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\"" Jan 29 16:21:36.249866 containerd[1482]: time="2025-01-29T16:21:36.249778312Z" level=info msg="TearDown network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" successfully" Jan 29 16:21:36.249866 containerd[1482]: time="2025-01-29T16:21:36.249796311Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" returns successfully" Jan 29 16:21:36.250739 containerd[1482]: time="2025-01-29T16:21:36.250692410Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\"" Jan 29 16:21:36.253909 containerd[1482]: time="2025-01-29T16:21:36.251096733Z" level=info msg="Ensure that sandbox 2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d in task-service has been cleanup successfully" Jan 29 16:21:36.253909 containerd[1482]: time="2025-01-29T16:21:36.252139783Z" level=info msg="TearDown network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" successfully" Jan 29 16:21:36.253909 containerd[1482]: time="2025-01-29T16:21:36.252173831Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" returns successfully" Jan 29 16:21:36.256860 systemd[1]: run-netns-cni\x2d38675801\x2d418e\x2d73a7\x2dede7\x2d90e531dd6526.mount: Deactivated successfully. 
Jan 29 16:21:36.258428 containerd[1482]: time="2025-01-29T16:21:36.258172885Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\"" Jan 29 16:21:36.258428 containerd[1482]: time="2025-01-29T16:21:36.258350137Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully" Jan 29 16:21:36.258428 containerd[1482]: time="2025-01-29T16:21:36.258372249Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully" Jan 29 16:21:36.258676 containerd[1482]: time="2025-01-29T16:21:36.258555831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:1,}" Jan 29 16:21:36.262265 containerd[1482]: time="2025-01-29T16:21:36.261845356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:4,}" Jan 29 16:21:36.578563 containerd[1482]: time="2025-01-29T16:21:36.578466809Z" level=error msg="Failed to destroy network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:36.580476 containerd[1482]: time="2025-01-29T16:21:36.580085350Z" level=error msg="encountered an error cleaning up failed sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:36.580476 containerd[1482]: time="2025-01-29T16:21:36.580221552Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:36.580885 kubelet[1831]: E0129 16:21:36.580809 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:36.580981 kubelet[1831]: E0129 16:21:36.580927 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:36.580981 kubelet[1831]: E0129 16:21:36.580962 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:36.581169 kubelet[1831]: E0129 16:21:36.581019 1831 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:36.584249 containerd[1482]: time="2025-01-29T16:21:36.584073874Z" level=error msg="Failed to destroy network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:36.584891 containerd[1482]: time="2025-01-29T16:21:36.584839867Z" level=error msg="encountered an error cleaning up failed sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:36.585906 containerd[1482]: time="2025-01-29T16:21:36.585732407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:36.586112 kubelet[1831]: E0129 16:21:36.586029 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:36.586182 kubelet[1831]: E0129 16:21:36.586125 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:36.586182 kubelet[1831]: E0129 16:21:36.586162 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:36.586271 kubelet[1831]: E0129 16:21:36.586230 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-lt79c" podUID="fab8c22d-5eb2-44c6-a21e-243ba8bafb34" Jan 29 16:21:36.950944 kubelet[1831]: E0129 16:21:36.950193 1831 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:36.971108 kubelet[1831]: E0129 16:21:36.971022 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:37.231615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d-shm.mount: Deactivated successfully. Jan 29 16:21:37.232667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a-shm.mount: Deactivated successfully. 
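The cycle above repeats because, on every sandbox add or delete, the calico CNI plugin stats `/var/lib/calico/nodename` and aborts when the file is absent; calico/node writes that file once it starts and mounts `/var/lib/calico/`, after which the same sandbox attempts succeed. A minimal sketch of that gating behavior, using the path and error wording from the log (the helper name and the simulated root directory are illustrative assumptions, not Calico code):

```python
import os
import tempfile

NODENAME_REL = "var/lib/calico/nodename"

def ensure_nodename(root: str) -> str:
    """Hypothetical stand-in for the CNI plugin's startup gate: read the
    nodename file written by calico/node, or fail with the error text seen
    in the log above."""
    path = os.path.join(root, NODENAME_REL)
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"stat {path}: no such file or directory: "
            "check that the calico/node container is running "
            "and has mounted /var/lib/calico/"
        )
    with open(path) as f:
        return f.read().strip()

if __name__ == "__main__":
    root = tempfile.mkdtemp()
    # Before calico/node has started, every sandbox add fails the same way.
    try:
        ensure_nodename(root)
    except FileNotFoundError as e:
        print("add failed:", e)
    # Once calico/node writes the file, the identical call succeeds.
    os.makedirs(os.path.join(root, "var/lib/calico"))
    with open(os.path.join(root, NODENAME_REL), "w") as f:
        f.write("worker-1\n")
    print("nodename:", ensure_nodename(root))
```

This is why the kubelet keeps retrying with incrementing `Attempt:` counts rather than giving up: the condition is transient, and clears on its own once calico/node becomes ready on the node.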
Jan 29 16:21:37.258081 kubelet[1831]: I0129 16:21:37.258001 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d" Jan 29 16:21:37.260944 containerd[1482]: time="2025-01-29T16:21:37.259602239Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\"" Jan 29 16:21:37.263514 containerd[1482]: time="2025-01-29T16:21:37.263278049Z" level=info msg="Ensure that sandbox d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d in task-service has been cleanup successfully" Jan 29 16:21:37.265293 kubelet[1831]: I0129 16:21:37.265241 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a" Jan 29 16:21:37.267236 containerd[1482]: time="2025-01-29T16:21:37.267115808Z" level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\"" Jan 29 16:21:37.267480 containerd[1482]: time="2025-01-29T16:21:37.267436970Z" level=info msg="Ensure that sandbox cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a in task-service has been cleanup successfully" Jan 29 16:21:37.269280 systemd[1]: run-netns-cni\x2deef5f927\x2d8af7\x2d62f3\x2d0231\x2de450536cd37e.mount: Deactivated successfully. 
Jan 29 16:21:37.272714 containerd[1482]: time="2025-01-29T16:21:37.270235355Z" level=info msg="TearDown network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" successfully" Jan 29 16:21:37.272714 containerd[1482]: time="2025-01-29T16:21:37.270285185Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" returns successfully" Jan 29 16:21:37.272714 containerd[1482]: time="2025-01-29T16:21:37.272021930Z" level=info msg="TearDown network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" successfully" Jan 29 16:21:37.272714 containerd[1482]: time="2025-01-29T16:21:37.272077416Z" level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" returns successfully" Jan 29 16:21:37.274770 containerd[1482]: time="2025-01-29T16:21:37.274182250Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\"" Jan 29 16:21:37.274770 containerd[1482]: time="2025-01-29T16:21:37.274435233Z" level=info msg="TearDown network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" successfully" Jan 29 16:21:37.274770 containerd[1482]: time="2025-01-29T16:21:37.274537643Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" returns successfully" Jan 29 16:21:37.279125 containerd[1482]: time="2025-01-29T16:21:37.277594971Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\"" Jan 29 16:21:37.279125 containerd[1482]: time="2025-01-29T16:21:37.277765692Z" level=info msg="TearDown network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" successfully" Jan 29 16:21:37.279125 containerd[1482]: time="2025-01-29T16:21:37.277784171Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" 
returns successfully" Jan 29 16:21:37.279125 containerd[1482]: time="2025-01-29T16:21:37.277901799Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\"" Jan 29 16:21:37.279125 containerd[1482]: time="2025-01-29T16:21:37.277998739Z" level=info msg="TearDown network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" successfully" Jan 29 16:21:37.279125 containerd[1482]: time="2025-01-29T16:21:37.278017599Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" returns successfully" Jan 29 16:21:37.278298 systemd[1]: run-netns-cni\x2d831351ff\x2dbf75\x2d25d4\x2d709c\x2d5b5439184084.mount: Deactivated successfully. Jan 29 16:21:37.280559 containerd[1482]: time="2025-01-29T16:21:37.280511773Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\"" Jan 29 16:21:37.282662 containerd[1482]: time="2025-01-29T16:21:37.282608123Z" level=info msg="TearDown network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" successfully" Jan 29 16:21:37.283022 containerd[1482]: time="2025-01-29T16:21:37.281618597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:2,}" Jan 29 16:21:37.283725 containerd[1482]: time="2025-01-29T16:21:37.283057137Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" returns successfully" Jan 29 16:21:37.284626 containerd[1482]: time="2025-01-29T16:21:37.284574687Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\"" Jan 29 16:21:37.284808 containerd[1482]: time="2025-01-29T16:21:37.284753279Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully" Jan 29 
16:21:37.284808 containerd[1482]: time="2025-01-29T16:21:37.284789454Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully" Jan 29 16:21:37.286042 containerd[1482]: time="2025-01-29T16:21:37.285966818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:5,}" Jan 29 16:21:37.581757 containerd[1482]: time="2025-01-29T16:21:37.581675391Z" level=error msg="Failed to destroy network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:37.585688 containerd[1482]: time="2025-01-29T16:21:37.584296560Z" level=error msg="encountered an error cleaning up failed sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:37.585688 containerd[1482]: time="2025-01-29T16:21:37.584421594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:37.586075 kubelet[1831]: E0129 16:21:37.584869 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:37.586075 kubelet[1831]: E0129 16:21:37.584960 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:37.586075 kubelet[1831]: E0129 16:21:37.585130 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:37.588743 kubelet[1831]: E0129 16:21:37.585414 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-7fcdb87857-lt79c" podUID="fab8c22d-5eb2-44c6-a21e-243ba8bafb34" Jan 29 16:21:37.622762 containerd[1482]: time="2025-01-29T16:21:37.622689996Z" level=error msg="Failed to destroy network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:37.626227 containerd[1482]: time="2025-01-29T16:21:37.626153296Z" level=error msg="encountered an error cleaning up failed sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:37.626446 containerd[1482]: time="2025-01-29T16:21:37.626272461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:37.626782 kubelet[1831]: E0129 16:21:37.626714 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:37.627155 kubelet[1831]: E0129 16:21:37.627108 1831 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:37.627438 kubelet[1831]: E0129 16:21:37.627400 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:37.628430 kubelet[1831]: E0129 16:21:37.627735 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:37.973249 kubelet[1831]: E0129 16:21:37.971727 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:38.297059 kubelet[1831]: I0129 16:21:38.271676 1831 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0" Jan 29 16:21:38.297059 kubelet[1831]: I0129 16:21:38.277784 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d" Jan 29 16:21:38.297800 containerd[1482]: time="2025-01-29T16:21:38.273366786Z" level=info msg="StopPodSandbox for \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\"" Jan 29 16:21:38.297800 containerd[1482]: time="2025-01-29T16:21:38.273667586Z" level=info msg="Ensure that sandbox d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0 in task-service has been cleanup successfully" Jan 29 16:21:38.297800 containerd[1482]: time="2025-01-29T16:21:38.281158887Z" level=info msg="StopPodSandbox for \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\"" Jan 29 16:21:38.297800 containerd[1482]: time="2025-01-29T16:21:38.281489516Z" level=info msg="Ensure that sandbox 4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d in task-service has been cleanup successfully" Jan 29 16:21:38.297800 containerd[1482]: time="2025-01-29T16:21:38.296902733Z" level=info msg="TearDown network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" successfully" Jan 29 16:21:38.297800 containerd[1482]: time="2025-01-29T16:21:38.296947878Z" level=info msg="StopPodSandbox for \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" returns successfully" Jan 29 16:21:38.297800 containerd[1482]: time="2025-01-29T16:21:38.297153811Z" level=info msg="TearDown network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" successfully" Jan 29 16:21:38.297800 containerd[1482]: time="2025-01-29T16:21:38.297169715Z" level=info msg="StopPodSandbox for \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" returns successfully" Jan 29 16:21:38.297800 
containerd[1482]: time="2025-01-29T16:21:38.297728679Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\"" Jan 29 16:21:38.228962 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d-shm.mount: Deactivated successfully. Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.297952729Z" level=info msg="TearDown network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.297974370Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" returns successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.298074617Z" level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\"" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.298158147Z" level=info msg="TearDown network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.298170706Z" level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" returns successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.298718007Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\"" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.298893481Z" level=info msg="TearDown network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.298908505Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" returns successfully" Jan 29 16:21:38.306107 containerd[1482]: 
time="2025-01-29T16:21:38.299016404Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\"" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.299113236Z" level=info msg="TearDown network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.299130342Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" returns successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.299705795Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\"" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.299798542Z" level=info msg="TearDown network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.299833408Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" returns successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.300006238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:3,}" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.305955190Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\"" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.306080633Z" level=info msg="TearDown network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" successfully" Jan 29 16:21:38.306107 containerd[1482]: time="2025-01-29T16:21:38.306094329Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" returns successfully" Jan 29 
16:21:38.280317 systemd[1]: run-netns-cni\x2d44d78265\x2dec35\x2d2440\x2d9243\x2ddfdd5655cb78.mount: Deactivated successfully. Jan 29 16:21:38.311198 containerd[1482]: time="2025-01-29T16:21:38.310373272Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\"" Jan 29 16:21:38.311198 containerd[1482]: time="2025-01-29T16:21:38.310522858Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully" Jan 29 16:21:38.311198 containerd[1482]: time="2025-01-29T16:21:38.310542282Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully" Jan 29 16:21:38.288247 systemd[1]: run-netns-cni\x2d37899c77\x2d0672\x2d1ff4\x2d1ee6\x2d3ae3efe06318.mount: Deactivated successfully. Jan 29 16:21:38.312891 containerd[1482]: time="2025-01-29T16:21:38.312784491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:6,}" Jan 29 16:21:38.569440 containerd[1482]: time="2025-01-29T16:21:38.569272572Z" level=error msg="Failed to destroy network for sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:38.572260 containerd[1482]: time="2025-01-29T16:21:38.569964999Z" level=error msg="encountered an error cleaning up failed sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:38.572260 containerd[1482]: 
time="2025-01-29T16:21:38.570114943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:38.572545 kubelet[1831]: E0129 16:21:38.570576 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:38.572545 kubelet[1831]: E0129 16:21:38.570677 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:38.576993 kubelet[1831]: E0129 16:21:38.572195 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:38.576993 kubelet[1831]: E0129 
16:21:38.572991 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-lt79c" podUID="fab8c22d-5eb2-44c6-a21e-243ba8bafb34" Jan 29 16:21:38.593473 containerd[1482]: time="2025-01-29T16:21:38.593382233Z" level=error msg="Failed to destroy network for sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:38.594471 containerd[1482]: time="2025-01-29T16:21:38.594127046Z" level=error msg="encountered an error cleaning up failed sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:38.594471 containerd[1482]: time="2025-01-29T16:21:38.594206632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:38.595989 kubelet[1831]: E0129 16:21:38.594481 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:38.595989 kubelet[1831]: E0129 16:21:38.594576 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:38.595989 kubelet[1831]: E0129 16:21:38.594613 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:38.596187 kubelet[1831]: E0129 16:21:38.594697 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:38.972382 kubelet[1831]: E0129 16:21:38.972249 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:39.229163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e-shm.mount: Deactivated successfully. Jan 29 16:21:39.285990 kubelet[1831]: I0129 16:21:39.285953 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8" Jan 29 16:21:39.288348 containerd[1482]: time="2025-01-29T16:21:39.288272769Z" level=info msg="StopPodSandbox for \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\"" Jan 29 16:21:39.288635 containerd[1482]: time="2025-01-29T16:21:39.288597337Z" level=info msg="Ensure that sandbox 39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8 in task-service has been cleanup successfully" Jan 29 16:21:39.289662 containerd[1482]: time="2025-01-29T16:21:39.289610831Z" level=info msg="TearDown network for sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\" successfully" Jan 29 16:21:39.289662 containerd[1482]: time="2025-01-29T16:21:39.289652146Z" level=info msg="StopPodSandbox for \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\" returns successfully" Jan 29 16:21:39.295229 containerd[1482]: time="2025-01-29T16:21:39.292542936Z" level=info msg="StopPodSandbox for \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\"" Jan 29 16:21:39.295229 containerd[1482]: 
time="2025-01-29T16:21:39.292671172Z" level=info msg="TearDown network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" successfully" Jan 29 16:21:39.295229 containerd[1482]: time="2025-01-29T16:21:39.292773294Z" level=info msg="StopPodSandbox for \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" returns successfully" Jan 29 16:21:39.294372 systemd[1]: run-netns-cni\x2d5d8c3ab6\x2d8a50\x2d3235\x2dae55\x2dbfdf684229c3.mount: Deactivated successfully. Jan 29 16:21:39.298234 containerd[1482]: time="2025-01-29T16:21:39.297488413Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\"" Jan 29 16:21:39.298234 containerd[1482]: time="2025-01-29T16:21:39.297599018Z" level=info msg="TearDown network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" successfully" Jan 29 16:21:39.298234 containerd[1482]: time="2025-01-29T16:21:39.297650453Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" returns successfully" Jan 29 16:21:39.301341 containerd[1482]: time="2025-01-29T16:21:39.301270866Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\"" Jan 29 16:21:39.302235 containerd[1482]: time="2025-01-29T16:21:39.301622249Z" level=info msg="TearDown network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" successfully" Jan 29 16:21:39.302516 containerd[1482]: time="2025-01-29T16:21:39.302403979Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" returns successfully" Jan 29 16:21:39.305336 containerd[1482]: time="2025-01-29T16:21:39.305246199Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\"" Jan 29 16:21:39.305526 containerd[1482]: time="2025-01-29T16:21:39.305401532Z" level=info msg="TearDown network for 
sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" successfully" Jan 29 16:21:39.305526 containerd[1482]: time="2025-01-29T16:21:39.305426518Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" returns successfully" Jan 29 16:21:39.306278 kubelet[1831]: I0129 16:21:39.306083 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e" Jan 29 16:21:39.307269 containerd[1482]: time="2025-01-29T16:21:39.306696170Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\"" Jan 29 16:21:39.308443 containerd[1482]: time="2025-01-29T16:21:39.307930697Z" level=info msg="TearDown network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" successfully" Jan 29 16:21:39.308443 containerd[1482]: time="2025-01-29T16:21:39.307968449Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" returns successfully" Jan 29 16:21:39.308443 containerd[1482]: time="2025-01-29T16:21:39.308212966Z" level=info msg="StopPodSandbox for \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\"" Jan 29 16:21:39.308672 containerd[1482]: time="2025-01-29T16:21:39.308491338Z" level=info msg="Ensure that sandbox 170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e in task-service has been cleanup successfully" Jan 29 16:21:39.312419 containerd[1482]: time="2025-01-29T16:21:39.308834150Z" level=info msg="TearDown network for sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\" successfully" Jan 29 16:21:39.312419 containerd[1482]: time="2025-01-29T16:21:39.308875005Z" level=info msg="StopPodSandbox for \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\" returns successfully" Jan 29 16:21:39.312419 containerd[1482]: 
time="2025-01-29T16:21:39.309064803Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\"" Jan 29 16:21:39.312419 containerd[1482]: time="2025-01-29T16:21:39.309169313Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully" Jan 29 16:21:39.312419 containerd[1482]: time="2025-01-29T16:21:39.309185163Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully" Jan 29 16:21:39.315671 systemd[1]: run-netns-cni\x2d1f3a283a\x2d791c\x2dc328\x2d2386\x2d0f0fb1eb7c29.mount: Deactivated successfully. Jan 29 16:21:39.317055 containerd[1482]: time="2025-01-29T16:21:39.314864096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:7,}" Jan 29 16:21:39.325744 containerd[1482]: time="2025-01-29T16:21:39.325407637Z" level=info msg="StopPodSandbox for \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\"" Jan 29 16:21:39.325744 containerd[1482]: time="2025-01-29T16:21:39.325545295Z" level=info msg="TearDown network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" successfully" Jan 29 16:21:39.325744 containerd[1482]: time="2025-01-29T16:21:39.325561675Z" level=info msg="StopPodSandbox for \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" returns successfully" Jan 29 16:21:39.326835 containerd[1482]: time="2025-01-29T16:21:39.326238431Z" level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\"" Jan 29 16:21:39.326835 containerd[1482]: time="2025-01-29T16:21:39.326397448Z" level=info msg="TearDown network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" successfully" Jan 29 16:21:39.326835 containerd[1482]: time="2025-01-29T16:21:39.326419805Z" 
level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" returns successfully" Jan 29 16:21:39.327154 containerd[1482]: time="2025-01-29T16:21:39.327110890Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\"" Jan 29 16:21:39.327845 containerd[1482]: time="2025-01-29T16:21:39.327238432Z" level=info msg="TearDown network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" successfully" Jan 29 16:21:39.327845 containerd[1482]: time="2025-01-29T16:21:39.327278280Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" returns successfully" Jan 29 16:21:39.328033 containerd[1482]: time="2025-01-29T16:21:39.327983056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:4,}" Jan 29 16:21:39.611633 containerd[1482]: time="2025-01-29T16:21:39.611472176Z" level=error msg="Failed to destroy network for sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:39.612923 containerd[1482]: time="2025-01-29T16:21:39.611919127Z" level=error msg="encountered an error cleaning up failed sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:39.612923 containerd[1482]: time="2025-01-29T16:21:39.612017335Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:39.613187 kubelet[1831]: E0129 16:21:39.612402 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:39.613187 kubelet[1831]: E0129 16:21:39.612494 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:39.613187 kubelet[1831]: E0129 16:21:39.612528 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j225r" Jan 29 16:21:39.613394 kubelet[1831]: E0129 16:21:39.612585 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j225r_calico-system(ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j225r" podUID="ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf" Jan 29 16:21:39.615482 containerd[1482]: time="2025-01-29T16:21:39.614812768Z" level=error msg="Failed to destroy network for sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:39.615482 containerd[1482]: time="2025-01-29T16:21:39.615278347Z" level=error msg="encountered an error cleaning up failed sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:39.615482 containerd[1482]: time="2025-01-29T16:21:39.615355244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 16:21:39.616668 kubelet[1831]: E0129 16:21:39.616042 1831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:21:39.616668 kubelet[1831]: E0129 16:21:39.616112 1831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:39.616668 kubelet[1831]: E0129 16:21:39.616145 1831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lt79c" Jan 29 16:21:39.617053 kubelet[1831]: E0129 16:21:39.616208 1831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-lt79c_default(fab8c22d-5eb2-44c6-a21e-243ba8bafb34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-lt79c" podUID="fab8c22d-5eb2-44c6-a21e-243ba8bafb34" Jan 29 16:21:39.848109 containerd[1482]: time="2025-01-29T16:21:39.848004458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:39.849631 containerd[1482]: time="2025-01-29T16:21:39.849556425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 16:21:39.852516 containerd[1482]: time="2025-01-29T16:21:39.852415727Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:39.856314 containerd[1482]: time="2025-01-29T16:21:39.856220963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:39.857432 containerd[1482]: time="2025-01-29T16:21:39.857171003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.66354259s" Jan 29 16:21:39.857432 containerd[1482]: time="2025-01-29T16:21:39.857255050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 
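Every sandbox failure above reduces to the same root cause the error message spells out: the Calico CNI plugin cannot `stat /var/lib/calico/nodename` until the calico/node container is running and has mounted `/var/lib/calico`. A minimal sketch of that readiness check (the path comes from the log; the helper function itself is hypothetical, not part of Calico):

```python
from pathlib import Path

def calico_node_ready(nodename_path: str = "/var/lib/calico/nodename") -> bool:
    """Mirror the stat the CNI plugin performs: the nodename file only
    exists after calico-node has started and written the node's name
    into it, so its absence means pod networking cannot be set up yet."""
    p = Path(nodename_path)
    return p.is_file() and p.read_text().strip() != ""
```

Until this condition holds, each CreatePodSandbox call fails exactly as logged and kubelet retries with an incremented attempt counter (`Attempt:4`, `Attempt:5`, ... in the log above).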
16:21:39.893334 containerd[1482]: time="2025-01-29T16:21:39.892256061Z" level=info msg="CreateContainer within sandbox \"fc1802fb106fdb51fbe5a4b9d9293b34b743d8e83c29fed2181adef39e517510\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 16:21:39.929773 containerd[1482]: time="2025-01-29T16:21:39.929673148Z" level=info msg="CreateContainer within sandbox \"fc1802fb106fdb51fbe5a4b9d9293b34b743d8e83c29fed2181adef39e517510\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"44f811ae85822349a5ae67447137e2b29e8ac9c5574692ef339d73ad34854548\"" Jan 29 16:21:39.931229 containerd[1482]: time="2025-01-29T16:21:39.931145336Z" level=info msg="StartContainer for \"44f811ae85822349a5ae67447137e2b29e8ac9c5574692ef339d73ad34854548\"" Jan 29 16:21:39.973275 kubelet[1831]: E0129 16:21:39.973171 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:40.070205 systemd[1]: Started cri-containerd-44f811ae85822349a5ae67447137e2b29e8ac9c5574692ef339d73ad34854548.scope - libcontainer container 44f811ae85822349a5ae67447137e2b29e8ac9c5574692ef339d73ad34854548. Jan 29 16:21:40.175349 containerd[1482]: time="2025-01-29T16:21:40.175098943Z" level=info msg="StartContainer for \"44f811ae85822349a5ae67447137e2b29e8ac9c5574692ef339d73ad34854548\" returns successfully" Jan 29 16:21:40.246328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98-shm.mount: Deactivated successfully. Jan 29 16:21:40.246502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886-shm.mount: Deactivated successfully. Jan 29 16:21:40.246621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362467260.mount: Deactivated successfully. Jan 29 16:21:40.284953 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jan 29 16:21:40.285146 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 16:21:40.317292 kubelet[1831]: E0129 16:21:40.316863 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 16:21:40.327136 kubelet[1831]: I0129 16:21:40.327047 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98" Jan 29 16:21:40.328079 containerd[1482]: time="2025-01-29T16:21:40.328002460Z" level=info msg="StopPodSandbox for \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\"" Jan 29 16:21:40.328600 containerd[1482]: time="2025-01-29T16:21:40.328336014Z" level=info msg="Ensure that sandbox 25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98 in task-service has been cleanup successfully" Jan 29 16:21:40.331714 containerd[1482]: time="2025-01-29T16:21:40.331572225Z" level=info msg="TearDown network for sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\" successfully" Jan 29 16:21:40.333147 containerd[1482]: time="2025-01-29T16:21:40.331710084Z" level=info msg="StopPodSandbox for \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\" returns successfully" Jan 29 16:21:40.334088 containerd[1482]: time="2025-01-29T16:21:40.334014531Z" level=info msg="StopPodSandbox for \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\"" Jan 29 16:21:40.334211 containerd[1482]: time="2025-01-29T16:21:40.334146148Z" level=info msg="TearDown network for sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\" successfully" Jan 29 16:21:40.334211 containerd[1482]: time="2025-01-29T16:21:40.334162072Z" level=info msg="StopPodSandbox for \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\" returns successfully" Jan 29 
16:21:40.335412 containerd[1482]: time="2025-01-29T16:21:40.335372621Z" level=info msg="StopPodSandbox for \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\"" Jan 29 16:21:40.335844 containerd[1482]: time="2025-01-29T16:21:40.335662856Z" level=info msg="TearDown network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" successfully" Jan 29 16:21:40.335844 containerd[1482]: time="2025-01-29T16:21:40.335705928Z" level=info msg="StopPodSandbox for \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" returns successfully" Jan 29 16:21:40.336258 systemd[1]: run-netns-cni\x2d49fe4c9a\x2dc0f3\x2df064\x2d6a3d\x2da87cf9d9ea78.mount: Deactivated successfully. Jan 29 16:21:40.338883 containerd[1482]: time="2025-01-29T16:21:40.337946214Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\"" Jan 29 16:21:40.338883 containerd[1482]: time="2025-01-29T16:21:40.338066854Z" level=info msg="TearDown network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" successfully" Jan 29 16:21:40.338883 containerd[1482]: time="2025-01-29T16:21:40.338082071Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" returns successfully" Jan 29 16:21:40.339756 kubelet[1831]: I0129 16:21:40.339245 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-978t8" podStartSLOduration=3.197274614 podStartE2EDuration="23.339215537s" podCreationTimestamp="2025-01-29 16:21:17 +0000 UTC" firstStartedPulling="2025-01-29 16:21:19.717309603 +0000 UTC m=+3.466267038" lastFinishedPulling="2025-01-29 16:21:39.859250515 +0000 UTC m=+23.608207961" observedRunningTime="2025-01-29 16:21:40.338792698 +0000 UTC m=+24.087750159" watchObservedRunningTime="2025-01-29 16:21:40.339215537 +0000 UTC m=+24.088172995" Jan 29 16:21:40.341525 containerd[1482]: 
time="2025-01-29T16:21:40.341238226Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\"" Jan 29 16:21:40.341525 containerd[1482]: time="2025-01-29T16:21:40.341412669Z" level=info msg="TearDown network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" successfully" Jan 29 16:21:40.341525 containerd[1482]: time="2025-01-29T16:21:40.341432966Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" returns successfully" Jan 29 16:21:40.342602 containerd[1482]: time="2025-01-29T16:21:40.342565799Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\"" Jan 29 16:21:40.342731 containerd[1482]: time="2025-01-29T16:21:40.342712057Z" level=info msg="TearDown network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" successfully" Jan 29 16:21:40.342987 containerd[1482]: time="2025-01-29T16:21:40.342731088Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" returns successfully" Jan 29 16:21:40.343698 containerd[1482]: time="2025-01-29T16:21:40.343649826Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\"" Jan 29 16:21:40.344008 kubelet[1831]: I0129 16:21:40.343971 1831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886" Jan 29 16:21:40.345180 containerd[1482]: time="2025-01-29T16:21:40.345129803Z" level=info msg="TearDown network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" successfully" Jan 29 16:21:40.345533 containerd[1482]: time="2025-01-29T16:21:40.345376740Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" returns successfully" Jan 29 16:21:40.347312 
containerd[1482]: time="2025-01-29T16:21:40.345510308Z" level=info msg="StopPodSandbox for \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\"" Jan 29 16:21:40.348307 containerd[1482]: time="2025-01-29T16:21:40.348060662Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\"" Jan 29 16:21:40.349153 containerd[1482]: time="2025-01-29T16:21:40.349121119Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully" Jan 29 16:21:40.350134 containerd[1482]: time="2025-01-29T16:21:40.349487153Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully" Jan 29 16:21:40.352024 containerd[1482]: time="2025-01-29T16:21:40.350358601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:8,}" Jan 29 16:21:40.352394 containerd[1482]: time="2025-01-29T16:21:40.350426360Z" level=info msg="Ensure that sandbox 8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886 in task-service has been cleanup successfully" Jan 29 16:21:40.354275 containerd[1482]: time="2025-01-29T16:21:40.354008830Z" level=info msg="TearDown network for sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\" successfully" Jan 29 16:21:40.354275 containerd[1482]: time="2025-01-29T16:21:40.354075843Z" level=info msg="StopPodSandbox for \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\" returns successfully" Jan 29 16:21:40.358169 containerd[1482]: time="2025-01-29T16:21:40.356195101Z" level=info msg="StopPodSandbox for \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\"" Jan 29 16:21:40.359004 containerd[1482]: time="2025-01-29T16:21:40.358674312Z" level=info msg="TearDown network for sandbox 
\"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\" successfully" Jan 29 16:21:40.359360 containerd[1482]: time="2025-01-29T16:21:40.358746199Z" level=info msg="StopPodSandbox for \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\" returns successfully" Jan 29 16:21:40.360535 systemd[1]: run-netns-cni\x2dea365f0d\x2d29ef\x2df050\x2ddbe0\x2dba7414865a7c.mount: Deactivated successfully. Jan 29 16:21:40.364483 containerd[1482]: time="2025-01-29T16:21:40.362891099Z" level=info msg="StopPodSandbox for \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\"" Jan 29 16:21:40.365340 containerd[1482]: time="2025-01-29T16:21:40.364625654Z" level=info msg="TearDown network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" successfully" Jan 29 16:21:40.365340 containerd[1482]: time="2025-01-29T16:21:40.364655046Z" level=info msg="StopPodSandbox for \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" returns successfully" Jan 29 16:21:40.366418 containerd[1482]: time="2025-01-29T16:21:40.366371988Z" level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\"" Jan 29 16:21:40.366702 containerd[1482]: time="2025-01-29T16:21:40.366582280Z" level=info msg="TearDown network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" successfully" Jan 29 16:21:40.366702 containerd[1482]: time="2025-01-29T16:21:40.366603858Z" level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" returns successfully" Jan 29 16:21:40.367860 containerd[1482]: time="2025-01-29T16:21:40.367801396Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\"" Jan 29 16:21:40.368310 containerd[1482]: time="2025-01-29T16:21:40.368218422Z" level=info msg="TearDown network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" 
successfully" Jan 29 16:21:40.368310 containerd[1482]: time="2025-01-29T16:21:40.368250893Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" returns successfully" Jan 29 16:21:40.371159 containerd[1482]: time="2025-01-29T16:21:40.370391935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:5,}" Jan 29 16:21:40.819710 systemd-networkd[1388]: cali867d18dde44: Link UP Jan 29 16:21:40.820402 systemd-networkd[1388]: cali867d18dde44: Gained carrier Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.505 [INFO][2805] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.565 [INFO][2805] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0 nginx-deployment-7fcdb87857- default fab8c22d-5eb2-44c6-a21e-243ba8bafb34 1156 0 2025-01-29 16:21:34 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 64.23.139.59 nginx-deployment-7fcdb87857-lt79c eth0 default [] [] [kns.default ksa.default.default] cali867d18dde44 [] []}} ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Namespace="default" Pod="nginx-deployment-7fcdb87857-lt79c" WorkloadEndpoint="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.565 [INFO][2805] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Namespace="default" Pod="nginx-deployment-7fcdb87857-lt79c" WorkloadEndpoint="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" Jan 29 16:21:40.837919 
containerd[1482]: 2025-01-29 16:21:40.641 [INFO][2823] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" HandleID="k8s-pod-network.6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Workload="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.753 [INFO][2823] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" HandleID="k8s-pod-network.6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Workload="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003195e0), Attrs:map[string]string{"namespace":"default", "node":"64.23.139.59", "pod":"nginx-deployment-7fcdb87857-lt79c", "timestamp":"2025-01-29 16:21:40.641454537 +0000 UTC"}, Hostname:"64.23.139.59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.753 [INFO][2823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.753 [INFO][2823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.753 [INFO][2823] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '64.23.139.59' Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.758 [INFO][2823] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" host="64.23.139.59" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.764 [INFO][2823] ipam/ipam.go 372: Looking up existing affinities for host host="64.23.139.59" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.772 [INFO][2823] ipam/ipam.go 489: Trying affinity for 192.168.58.64/26 host="64.23.139.59" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.776 [INFO][2823] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="64.23.139.59" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.780 [INFO][2823] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="64.23.139.59" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.780 [INFO][2823] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" host="64.23.139.59" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.783 [INFO][2823] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.788 [INFO][2823] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" host="64.23.139.59" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.797 [INFO][2823] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.58.65/26] block=192.168.58.64/26 
handle="k8s-pod-network.6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" host="64.23.139.59" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.797 [INFO][2823] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.65/26] handle="k8s-pod-network.6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" host="64.23.139.59" Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.797 [INFO][2823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:21:40.837919 containerd[1482]: 2025-01-29 16:21:40.797 [INFO][2823] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.65/26] IPv6=[] ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" HandleID="k8s-pod-network.6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Workload="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" Jan 29 16:21:40.839321 containerd[1482]: 2025-01-29 16:21:40.801 [INFO][2805] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Namespace="default" Pod="nginx-deployment-7fcdb87857-lt79c" WorkloadEndpoint="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"fab8c22d-5eb2-44c6-a21e-243ba8bafb34", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.139.59", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-lt79c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali867d18dde44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:21:40.839321 containerd[1482]: 2025-01-29 16:21:40.801 [INFO][2805] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.65/32] ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Namespace="default" Pod="nginx-deployment-7fcdb87857-lt79c" WorkloadEndpoint="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" Jan 29 16:21:40.839321 containerd[1482]: 2025-01-29 16:21:40.801 [INFO][2805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali867d18dde44 ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Namespace="default" Pod="nginx-deployment-7fcdb87857-lt79c" WorkloadEndpoint="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" Jan 29 16:21:40.839321 containerd[1482]: 2025-01-29 16:21:40.820 [INFO][2805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Namespace="default" Pod="nginx-deployment-7fcdb87857-lt79c" WorkloadEndpoint="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" Jan 29 16:21:40.839321 containerd[1482]: 2025-01-29 16:21:40.821 [INFO][2805] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Namespace="default" Pod="nginx-deployment-7fcdb87857-lt79c" 
WorkloadEndpoint="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"fab8c22d-5eb2-44c6-a21e-243ba8bafb34", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.139.59", ContainerID:"6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc", Pod:"nginx-deployment-7fcdb87857-lt79c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali867d18dde44", MAC:"b2:16:1b:f9:c3:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:21:40.839321 containerd[1482]: 2025-01-29 16:21:40.835 [INFO][2805] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc" Namespace="default" Pod="nginx-deployment-7fcdb87857-lt79c" WorkloadEndpoint="64.23.139.59-k8s-nginx--deployment--7fcdb87857--lt79c-eth0" Jan 29 16:21:40.892788 containerd[1482]: time="2025-01-29T16:21:40.892499171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:40.893470 containerd[1482]: time="2025-01-29T16:21:40.893342788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:40.893780 containerd[1482]: time="2025-01-29T16:21:40.893445796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:40.893780 containerd[1482]: time="2025-01-29T16:21:40.893596305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:40.932282 systemd[1]: Started cri-containerd-6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc.scope - libcontainer container 6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc. Jan 29 16:21:40.939104 systemd-networkd[1388]: calie2d10b1d1d6: Link UP Jan 29 16:21:40.939721 systemd-networkd[1388]: calie2d10b1d1d6: Gained carrier Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.499 [INFO][2791] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.564 [INFO][2791] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {64.23.139.59-k8s-csi--node--driver--j225r-eth0 csi-node-driver- calico-system ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf 1055 0 2025-01-29 16:21:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 64.23.139.59 csi-node-driver-j225r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie2d10b1d1d6 [] []}} 
ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Namespace="calico-system" Pod="csi-node-driver-j225r" WorkloadEndpoint="64.23.139.59-k8s-csi--node--driver--j225r-" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.565 [INFO][2791] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Namespace="calico-system" Pod="csi-node-driver-j225r" WorkloadEndpoint="64.23.139.59-k8s-csi--node--driver--j225r-eth0" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.660 [INFO][2819] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" HandleID="k8s-pod-network.9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Workload="64.23.139.59-k8s-csi--node--driver--j225r-eth0" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.761 [INFO][2819] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" HandleID="k8s-pod-network.9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Workload="64.23.139.59-k8s-csi--node--driver--j225r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052f070), Attrs:map[string]string{"namespace":"calico-system", "node":"64.23.139.59", "pod":"csi-node-driver-j225r", "timestamp":"2025-01-29 16:21:40.660759107 +0000 UTC"}, Hostname:"64.23.139.59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.761 [INFO][2819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.797 [INFO][2819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.797 [INFO][2819] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '64.23.139.59' Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.859 [INFO][2819] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" host="64.23.139.59" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.870 [INFO][2819] ipam/ipam.go 372: Looking up existing affinities for host host="64.23.139.59" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.882 [INFO][2819] ipam/ipam.go 489: Trying affinity for 192.168.58.64/26 host="64.23.139.59" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.889 [INFO][2819] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="64.23.139.59" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.894 [INFO][2819] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="64.23.139.59" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.895 [INFO][2819] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" host="64.23.139.59" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.900 [INFO][2819] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.907 [INFO][2819] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" host="64.23.139.59" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.917 [INFO][2819] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.58.66/26] block=192.168.58.64/26 
handle="k8s-pod-network.9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" host="64.23.139.59" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.917 [INFO][2819] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.66/26] handle="k8s-pod-network.9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" host="64.23.139.59" Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.917 [INFO][2819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:21:40.972778 containerd[1482]: 2025-01-29 16:21:40.917 [INFO][2819] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.66/26] IPv6=[] ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" HandleID="k8s-pod-network.9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Workload="64.23.139.59-k8s-csi--node--driver--j225r-eth0" Jan 29 16:21:40.975233 containerd[1482]: 2025-01-29 16:21:40.922 [INFO][2791] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Namespace="calico-system" Pod="csi-node-driver-j225r" WorkloadEndpoint="64.23.139.59-k8s-csi--node--driver--j225r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.139.59-k8s-csi--node--driver--j225r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 21, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.139.59", ContainerID:"", Pod:"csi-node-driver-j225r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2d10b1d1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:21:40.975233 containerd[1482]: 2025-01-29 16:21:40.922 [INFO][2791] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.66/32] ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Namespace="calico-system" Pod="csi-node-driver-j225r" WorkloadEndpoint="64.23.139.59-k8s-csi--node--driver--j225r-eth0" Jan 29 16:21:40.975233 containerd[1482]: 2025-01-29 16:21:40.922 [INFO][2791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2d10b1d1d6 ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Namespace="calico-system" Pod="csi-node-driver-j225r" WorkloadEndpoint="64.23.139.59-k8s-csi--node--driver--j225r-eth0" Jan 29 16:21:40.975233 containerd[1482]: 2025-01-29 16:21:40.935 [INFO][2791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Namespace="calico-system" Pod="csi-node-driver-j225r" WorkloadEndpoint="64.23.139.59-k8s-csi--node--driver--j225r-eth0" Jan 29 16:21:40.975233 containerd[1482]: 2025-01-29 16:21:40.945 [INFO][2791] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Namespace="calico-system" 
Pod="csi-node-driver-j225r" WorkloadEndpoint="64.23.139.59-k8s-csi--node--driver--j225r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.139.59-k8s-csi--node--driver--j225r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 21, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.139.59", ContainerID:"9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e", Pod:"csi-node-driver-j225r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2d10b1d1d6", MAC:"72:31:1e:e2:78:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:21:40.975233 containerd[1482]: 2025-01-29 16:21:40.970 [INFO][2791] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e" Namespace="calico-system" Pod="csi-node-driver-j225r" WorkloadEndpoint="64.23.139.59-k8s-csi--node--driver--j225r-eth0" Jan 29 16:21:40.975738 
kubelet[1831]: E0129 16:21:40.973770 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:41.051270 containerd[1482]: time="2025-01-29T16:21:41.051086042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:41.051270 containerd[1482]: time="2025-01-29T16:21:41.051192580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:41.051270 containerd[1482]: time="2025-01-29T16:21:41.051217729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:41.051953 containerd[1482]: time="2025-01-29T16:21:41.051335841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:41.059029 containerd[1482]: time="2025-01-29T16:21:41.058945982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lt79c,Uid:fab8c22d-5eb2-44c6-a21e-243ba8bafb34,Namespace:default,Attempt:5,} returns sandbox id \"6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc\"" Jan 29 16:21:41.062564 containerd[1482]: time="2025-01-29T16:21:41.062282013Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:21:41.087846 systemd[1]: Started cri-containerd-9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e.scope - libcontainer container 9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e. 
Jan 29 16:21:41.132469 containerd[1482]: time="2025-01-29T16:21:41.132401697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j225r,Uid:ad7fc0a9-ad4e-43cb-a88c-afc4b29ddcaf,Namespace:calico-system,Attempt:8,} returns sandbox id \"9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e\"" Jan 29 16:21:41.358321 kubelet[1831]: I0129 16:21:41.358172 1831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:21:41.359730 kubelet[1831]: E0129 16:21:41.358793 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 16:21:41.974664 kubelet[1831]: E0129 16:21:41.974596 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:42.154068 systemd-networkd[1388]: calie2d10b1d1d6: Gained IPv6LL Jan 29 16:21:42.269150 kernel: bpftool[3064]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 16:21:42.282639 systemd-networkd[1388]: cali867d18dde44: Gained IPv6LL Jan 29 16:21:42.799115 systemd-networkd[1388]: vxlan.calico: Link UP Jan 29 16:21:42.799129 systemd-networkd[1388]: vxlan.calico: Gained carrier Jan 29 16:21:42.974880 kubelet[1831]: E0129 16:21:42.974802 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:43.975688 kubelet[1831]: E0129 16:21:43.975608 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:44.010945 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Jan 29 16:21:44.693765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517838707.mount: Deactivated successfully. 
Jan 29 16:21:44.977968 kubelet[1831]: E0129 16:21:44.976116 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:45.074915 update_engine[1462]: I20250129 16:21:45.073962 1462 update_attempter.cc:509] Updating boot flags... Jan 29 16:21:45.175210 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2783) Jan 29 16:21:45.324801 kubelet[1831]: I0129 16:21:45.324752 1831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:21:45.326105 kubelet[1831]: E0129 16:21:45.325465 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 16:21:45.338784 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3101) Jan 29 16:21:45.526917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3101) Jan 29 16:21:45.858947 kubelet[1831]: E0129 16:21:45.858888 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 16:21:45.979234 kubelet[1831]: E0129 16:21:45.979174 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:46.980380 kubelet[1831]: E0129 16:21:46.980277 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:47.272903 containerd[1482]: time="2025-01-29T16:21:47.271928795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:47.278137 containerd[1482]: time="2025-01-29T16:21:47.278013315Z" level=info msg="stop 
pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 16:21:47.281867 containerd[1482]: time="2025-01-29T16:21:47.280578672Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:47.288659 containerd[1482]: time="2025-01-29T16:21:47.288497647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:47.291385 containerd[1482]: time="2025-01-29T16:21:47.291308954Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 6.228527648s" Jan 29 16:21:47.291385 containerd[1482]: time="2025-01-29T16:21:47.291381859Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:21:47.295165 containerd[1482]: time="2025-01-29T16:21:47.295103723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 16:21:47.297698 containerd[1482]: time="2025-01-29T16:21:47.297592996Z" level=info msg="CreateContainer within sandbox \"6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 16:21:47.336683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573134001.mount: Deactivated successfully. 
Jan 29 16:21:47.341362 containerd[1482]: time="2025-01-29T16:21:47.341120688Z" level=info msg="CreateContainer within sandbox \"6da7a205d25b469323a93c067911d3c07890e05aa3c95ee6bbec40277b5404dc\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7190edc69a3ecf3a45b58ba2a9bab828eefcbd56d3a77d5f0fdd65b03737b8ba\"" Jan 29 16:21:47.342758 containerd[1482]: time="2025-01-29T16:21:47.342243274Z" level=info msg="StartContainer for \"7190edc69a3ecf3a45b58ba2a9bab828eefcbd56d3a77d5f0fdd65b03737b8ba\"" Jan 29 16:21:47.430547 systemd[1]: run-containerd-runc-k8s.io-7190edc69a3ecf3a45b58ba2a9bab828eefcbd56d3a77d5f0fdd65b03737b8ba-runc.RVJvJS.mount: Deactivated successfully. Jan 29 16:21:47.446305 systemd[1]: Started cri-containerd-7190edc69a3ecf3a45b58ba2a9bab828eefcbd56d3a77d5f0fdd65b03737b8ba.scope - libcontainer container 7190edc69a3ecf3a45b58ba2a9bab828eefcbd56d3a77d5f0fdd65b03737b8ba. Jan 29 16:21:47.501322 containerd[1482]: time="2025-01-29T16:21:47.501242715Z" level=info msg="StartContainer for \"7190edc69a3ecf3a45b58ba2a9bab828eefcbd56d3a77d5f0fdd65b03737b8ba\" returns successfully" Jan 29 16:21:47.980643 kubelet[1831]: E0129 16:21:47.980568 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:48.806718 containerd[1482]: time="2025-01-29T16:21:48.806638727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:48.810171 containerd[1482]: time="2025-01-29T16:21:48.810057093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 16:21:48.812608 containerd[1482]: time="2025-01-29T16:21:48.812524637Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:48.817724 containerd[1482]: 
time="2025-01-29T16:21:48.817221275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:48.818697 containerd[1482]: time="2025-01-29T16:21:48.818621013Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.523452844s" Jan 29 16:21:48.818697 containerd[1482]: time="2025-01-29T16:21:48.818694510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 16:21:48.822808 containerd[1482]: time="2025-01-29T16:21:48.822515702Z" level=info msg="CreateContainer within sandbox \"9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 16:21:48.857618 containerd[1482]: time="2025-01-29T16:21:48.857507807Z" level=info msg="CreateContainer within sandbox \"9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d41ea911bdfd1076d595d05ba0c1a1cfde11af37573626265367df510bd57095\"" Jan 29 16:21:48.859861 containerd[1482]: time="2025-01-29T16:21:48.858288800Z" level=info msg="StartContainer for \"d41ea911bdfd1076d595d05ba0c1a1cfde11af37573626265367df510bd57095\"" Jan 29 16:21:48.915155 systemd[1]: Started cri-containerd-d41ea911bdfd1076d595d05ba0c1a1cfde11af37573626265367df510bd57095.scope - libcontainer container d41ea911bdfd1076d595d05ba0c1a1cfde11af37573626265367df510bd57095. 
Jan 29 16:21:48.974866 containerd[1482]: time="2025-01-29T16:21:48.974384815Z" level=info msg="StartContainer for \"d41ea911bdfd1076d595d05ba0c1a1cfde11af37573626265367df510bd57095\" returns successfully" Jan 29 16:21:48.978252 containerd[1482]: time="2025-01-29T16:21:48.978026629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 16:21:48.981057 kubelet[1831]: E0129 16:21:48.980959 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:49.981301 kubelet[1831]: E0129 16:21:49.981234 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:50.716306 containerd[1482]: time="2025-01-29T16:21:50.716224552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:50.718648 containerd[1482]: time="2025-01-29T16:21:50.718542765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 16:21:50.722311 containerd[1482]: time="2025-01-29T16:21:50.722222055Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:50.729157 containerd[1482]: time="2025-01-29T16:21:50.729045390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:50.730209 containerd[1482]: time="2025-01-29T16:21:50.730019014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.751931982s" Jan 29 16:21:50.730209 containerd[1482]: time="2025-01-29T16:21:50.730067904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 16:21:50.735306 containerd[1482]: time="2025-01-29T16:21:50.735253703Z" level=info msg="CreateContainer within sandbox \"9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 16:21:50.790462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224783534.mount: Deactivated successfully. Jan 29 16:21:50.805783 containerd[1482]: time="2025-01-29T16:21:50.805683971Z" level=info msg="CreateContainer within sandbox \"9639d56e7b9963a19277f1a5dde2b3814ecfa90fb5273f8c97d70d1d34cc048e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2114159ab00fd13805d5ea3693054e7105c49aec74ffd28febc52bff3c7813be\"" Jan 29 16:21:50.806683 containerd[1482]: time="2025-01-29T16:21:50.806415865Z" level=info msg="StartContainer for \"2114159ab00fd13805d5ea3693054e7105c49aec74ffd28febc52bff3c7813be\"" Jan 29 16:21:50.860444 systemd[1]: Started cri-containerd-2114159ab00fd13805d5ea3693054e7105c49aec74ffd28febc52bff3c7813be.scope - libcontainer container 2114159ab00fd13805d5ea3693054e7105c49aec74ffd28febc52bff3c7813be. 
Jan 29 16:21:50.918743 containerd[1482]: time="2025-01-29T16:21:50.918659906Z" level=info msg="StartContainer for \"2114159ab00fd13805d5ea3693054e7105c49aec74ffd28febc52bff3c7813be\" returns successfully" Jan 29 16:21:50.981644 kubelet[1831]: E0129 16:21:50.981401 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:51.138851 kubelet[1831]: I0129 16:21:51.138781 1831 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 16:21:51.139292 kubelet[1831]: I0129 16:21:51.139073 1831 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 16:21:51.457726 kubelet[1831]: I0129 16:21:51.457617 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-lt79c" podStartSLOduration=11.224421988 podStartE2EDuration="17.457592082s" podCreationTimestamp="2025-01-29 16:21:34 +0000 UTC" firstStartedPulling="2025-01-29 16:21:41.061141948 +0000 UTC m=+24.810099387" lastFinishedPulling="2025-01-29 16:21:47.294312039 +0000 UTC m=+31.043269481" observedRunningTime="2025-01-29 16:21:48.424202267 +0000 UTC m=+32.173159725" watchObservedRunningTime="2025-01-29 16:21:51.457592082 +0000 UTC m=+35.206549538" Jan 29 16:21:51.981908 kubelet[1831]: E0129 16:21:51.981839 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:52.982513 kubelet[1831]: E0129 16:21:52.982431 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:53.040685 kubelet[1831]: I0129 16:21:53.040590 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j225r" 
podStartSLOduration=26.442829564 podStartE2EDuration="36.0405548s" podCreationTimestamp="2025-01-29 16:21:17 +0000 UTC" firstStartedPulling="2025-01-29 16:21:41.135430167 +0000 UTC m=+24.884387607" lastFinishedPulling="2025-01-29 16:21:50.73315539 +0000 UTC m=+34.482112843" observedRunningTime="2025-01-29 16:21:51.457986628 +0000 UTC m=+35.206944086" watchObservedRunningTime="2025-01-29 16:21:53.0405548 +0000 UTC m=+36.789512260" Jan 29 16:21:53.054459 systemd[1]: Created slice kubepods-besteffort-podd5b8204c_e70e_4ca2_a93e_00b00e45a4e7.slice - libcontainer container kubepods-besteffort-podd5b8204c_e70e_4ca2_a93e_00b00e45a4e7.slice. Jan 29 16:21:53.104658 kubelet[1831]: I0129 16:21:53.104547 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhd9l\" (UniqueName: \"kubernetes.io/projected/d5b8204c-e70e-4ca2-a93e-00b00e45a4e7-kube-api-access-qhd9l\") pod \"nfs-server-provisioner-0\" (UID: \"d5b8204c-e70e-4ca2-a93e-00b00e45a4e7\") " pod="default/nfs-server-provisioner-0" Jan 29 16:21:53.104658 kubelet[1831]: I0129 16:21:53.104647 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d5b8204c-e70e-4ca2-a93e-00b00e45a4e7-data\") pod \"nfs-server-provisioner-0\" (UID: \"d5b8204c-e70e-4ca2-a93e-00b00e45a4e7\") " pod="default/nfs-server-provisioner-0" Jan 29 16:21:53.360280 containerd[1482]: time="2025-01-29T16:21:53.359579213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d5b8204c-e70e-4ca2-a93e-00b00e45a4e7,Namespace:default,Attempt:0,}" Jan 29 16:21:53.982678 kubelet[1831]: E0129 16:21:53.982608 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:54.738003 systemd-networkd[1388]: cali60e51b789ff: Link UP Jan 29 16:21:54.739053 systemd-networkd[1388]: cali60e51b789ff: Gained carrier Jan 29 
16:21:54.760897 containerd[1482]: 2025-01-29 16:21:53.469 [INFO][3372] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {64.23.139.59-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d5b8204c-e70e-4ca2-a93e-00b00e45a4e7 1293 0 2025-01-29 16:21:52 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 64.23.139.59 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.139.59-k8s-nfs--server--provisioner--0-" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:53.469 [INFO][3372] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:53.539 [INFO][3392] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" 
HandleID="k8s-pod-network.43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Workload="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:53.563 [INFO][3392] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" HandleID="k8s-pod-network.43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Workload="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"64.23.139.59", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 16:21:53.539361124 +0000 UTC"}, Hostname:"64.23.139.59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:53.563 [INFO][3392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:53.563 [INFO][3392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:53.563 [INFO][3392] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '64.23.139.59' Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:53.569 [INFO][3392] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" host="64.23.139.59" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:53.580 [INFO][3392] ipam/ipam.go 372: Looking up existing affinities for host host="64.23.139.59" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.662 [INFO][3392] ipam/ipam.go 489: Trying affinity for 192.168.58.64/26 host="64.23.139.59" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.668 [INFO][3392] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="64.23.139.59" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.672 [INFO][3392] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="64.23.139.59" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.672 [INFO][3392] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" host="64.23.139.59" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.680 [INFO][3392] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60 Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.713 [INFO][3392] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" host="64.23.139.59" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.730 [INFO][3392] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.58.67/26] block=192.168.58.64/26 
handle="k8s-pod-network.43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" host="64.23.139.59" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.730 [INFO][3392] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.67/26] handle="k8s-pod-network.43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" host="64.23.139.59" Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.730 [INFO][3392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:21:54.760897 containerd[1482]: 2025-01-29 16:21:54.730 [INFO][3392] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.67/26] IPv6=[] ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" HandleID="k8s-pod-network.43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Workload="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:21:54.762556 containerd[1482]: 2025-01-29 16:21:54.733 [INFO][3372] cni-plugin/k8s.go 386: Populated endpoint ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.139.59-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d5b8204c-e70e-4ca2-a93e-00b00e45a4e7", ResourceVersion:"1293", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.139.59", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:21:54.762556 containerd[1482]: 2025-01-29 16:21:54.733 [INFO][3372] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.67/32] ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:21:54.762556 containerd[1482]: 2025-01-29 16:21:54.734 [INFO][3372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:21:54.762556 containerd[1482]: 2025-01-29 16:21:54.737 [INFO][3372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:21:54.763217 containerd[1482]: 2025-01-29 16:21:54.739 [INFO][3372] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.139.59-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d5b8204c-e70e-4ca2-a93e-00b00e45a4e7", ResourceVersion:"1293", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.139.59", ContainerID:"43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"4e:f7:d4:05:a2:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:21:54.763217 containerd[1482]: 2025-01-29 16:21:54.756 [INFO][3372] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.139.59-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:21:54.804658 containerd[1482]: time="2025-01-29T16:21:54.804335785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:54.804976 containerd[1482]: time="2025-01-29T16:21:54.804621529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:54.806055 containerd[1482]: time="2025-01-29T16:21:54.805845111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:54.806055 containerd[1482]: time="2025-01-29T16:21:54.806002333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:54.848315 systemd[1]: Started cri-containerd-43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60.scope - libcontainer container 43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60. Jan 29 16:21:54.925923 containerd[1482]: time="2025-01-29T16:21:54.925295051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d5b8204c-e70e-4ca2-a93e-00b00e45a4e7,Namespace:default,Attempt:0,} returns sandbox id \"43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60\"" Jan 29 16:21:54.929426 containerd[1482]: time="2025-01-29T16:21:54.929073632Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 16:21:54.983200 kubelet[1831]: E0129 16:21:54.983138 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:55.983591 kubelet[1831]: E0129 16:21:55.983508 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:56.682213 systemd-networkd[1388]: cali60e51b789ff: Gained IPv6LL Jan 29 16:21:56.951495 kubelet[1831]: E0129 16:21:56.950679 1831 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 29 16:21:56.984218 kubelet[1831]: E0129 16:21:56.983934 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:57.725750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326774271.mount: Deactivated successfully. Jan 29 16:21:57.985047 kubelet[1831]: E0129 16:21:57.984854 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:58.985360 kubelet[1831]: E0129 16:21:58.985172 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:21:59.985839 kubelet[1831]: E0129 16:21:59.985754 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:00.974082 containerd[1482]: time="2025-01-29T16:22:00.973973801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:22:00.980865 containerd[1482]: time="2025-01-29T16:22:00.980696291Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 29 16:22:00.985198 containerd[1482]: time="2025-01-29T16:22:00.985062607Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:22:00.987304 kubelet[1831]: E0129 16:22:00.987167 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:00.993124 containerd[1482]: time="2025-01-29T16:22:00.993037754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:22:00.997207 containerd[1482]: time="2025-01-29T16:22:00.996973707Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.067850839s" Jan 29 16:22:00.997207 containerd[1482]: time="2025-01-29T16:22:00.997041179Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 16:22:01.004888 containerd[1482]: time="2025-01-29T16:22:01.003713822Z" level=info msg="CreateContainer within sandbox \"43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 16:22:01.123325 containerd[1482]: time="2025-01-29T16:22:01.123218821Z" level=info msg="CreateContainer within sandbox \"43b180299e49a4dad0330d63710255d8d67fbde9455d453560273d5d30e0fc60\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"04d4e95214728127aaac21b45ef6d711fff07754edb33a0744291606204b5c0b\"" Jan 29 16:22:01.124861 containerd[1482]: time="2025-01-29T16:22:01.124795224Z" level=info msg="StartContainer for \"04d4e95214728127aaac21b45ef6d711fff07754edb33a0744291606204b5c0b\"" Jan 29 16:22:01.188672 systemd[1]: Started cri-containerd-04d4e95214728127aaac21b45ef6d711fff07754edb33a0744291606204b5c0b.scope - libcontainer container 04d4e95214728127aaac21b45ef6d711fff07754edb33a0744291606204b5c0b. 
Jan 29 16:22:01.243835 containerd[1482]: time="2025-01-29T16:22:01.243627396Z" level=info msg="StartContainer for \"04d4e95214728127aaac21b45ef6d711fff07754edb33a0744291606204b5c0b\" returns successfully" Jan 29 16:22:01.988743 kubelet[1831]: E0129 16:22:01.988601 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:02.989153 kubelet[1831]: E0129 16:22:02.989088 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:03.989697 kubelet[1831]: E0129 16:22:03.989633 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:04.990881 kubelet[1831]: E0129 16:22:04.990776 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:05.992070 kubelet[1831]: E0129 16:22:05.991959 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:06.993299 kubelet[1831]: E0129 16:22:06.993179 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:07.060505 systemd[1]: Started sshd@9-64.23.139.59:22-92.255.85.189:47158.service - OpenSSH per-connection server daemon (92.255.85.189:47158). Jan 29 16:22:07.994267 kubelet[1831]: E0129 16:22:07.994180 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:08.483198 sshd[3570]: Invalid user username from 92.255.85.189 port 47158 Jan 29 16:22:08.710375 sshd[3570]: Connection closed by invalid user username 92.255.85.189 port 47158 [preauth] Jan 29 16:22:08.713776 systemd[1]: sshd@9-64.23.139.59:22-92.255.85.189:47158.service: Deactivated successfully. 
Jan 29 16:22:08.994659 kubelet[1831]: E0129 16:22:08.994587 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:09.995920 kubelet[1831]: E0129 16:22:09.995838 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:10.996916 kubelet[1831]: E0129 16:22:10.996849 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:11.287779 kubelet[1831]: I0129 16:22:11.287687 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=13.217427986 podStartE2EDuration="19.287663581s" podCreationTimestamp="2025-01-29 16:21:52 +0000 UTC" firstStartedPulling="2025-01-29 16:21:54.928042118 +0000 UTC m=+38.676999568" lastFinishedPulling="2025-01-29 16:22:00.998277723 +0000 UTC m=+44.747235163" observedRunningTime="2025-01-29 16:22:01.501701935 +0000 UTC m=+45.250659402" watchObservedRunningTime="2025-01-29 16:22:11.287663581 +0000 UTC m=+55.036621041" Jan 29 16:22:11.301380 systemd[1]: Created slice kubepods-besteffort-pod637c6949_3b8d_407e_b426_4e48cacad1b7.slice - libcontainer container kubepods-besteffort-pod637c6949_3b8d_407e_b426_4e48cacad1b7.slice. 
Jan 29 16:22:11.337421 kubelet[1831]: I0129 16:22:11.337239 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11e9b45c-715b-4b78-af5a-9905d3f67b4e\" (UniqueName: \"kubernetes.io/nfs/637c6949-3b8d-407e-b426-4e48cacad1b7-pvc-11e9b45c-715b-4b78-af5a-9905d3f67b4e\") pod \"test-pod-1\" (UID: \"637c6949-3b8d-407e-b426-4e48cacad1b7\") " pod="default/test-pod-1" Jan 29 16:22:11.337421 kubelet[1831]: I0129 16:22:11.337313 1831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmjb\" (UniqueName: \"kubernetes.io/projected/637c6949-3b8d-407e-b426-4e48cacad1b7-kube-api-access-7kmjb\") pod \"test-pod-1\" (UID: \"637c6949-3b8d-407e-b426-4e48cacad1b7\") " pod="default/test-pod-1" Jan 29 16:22:11.483190 kernel: FS-Cache: Loaded Jan 29 16:22:11.591134 kernel: RPC: Registered named UNIX socket transport module. Jan 29 16:22:11.591324 kernel: RPC: Registered udp transport module. Jan 29 16:22:11.591372 kernel: RPC: Registered tcp transport module. Jan 29 16:22:11.592535 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 16:22:11.594816 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 29 16:22:11.903053 kernel: NFS: Registering the id_resolver key type Jan 29 16:22:11.905097 kernel: Key type id_resolver registered Jan 29 16:22:11.905277 kernel: Key type id_legacy registered Jan 29 16:22:11.963164 nfsidmap[3590]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '0.0-5-94c51ad0b0' Jan 29 16:22:11.971102 nfsidmap[3591]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '0.0-5-94c51ad0b0' Jan 29 16:22:11.997264 kubelet[1831]: E0129 16:22:11.997183 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:12.208590 containerd[1482]: time="2025-01-29T16:22:12.208268442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:637c6949-3b8d-407e-b426-4e48cacad1b7,Namespace:default,Attempt:0,}" Jan 29 16:22:12.452441 systemd-networkd[1388]: cali5ec59c6bf6e: Link UP Jan 29 16:22:12.454439 systemd-networkd[1388]: cali5ec59c6bf6e: Gained carrier Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.307 [INFO][3592] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {64.23.139.59-k8s-test--pod--1-eth0 default 637c6949-3b8d-407e-b426-4e48cacad1b7 1363 0 2025-01-29 16:21:53 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 64.23.139.59 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.139.59-k8s-test--pod--1-" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.308 [INFO][3592] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.139.59-k8s-test--pod--1-eth0" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.362 [INFO][3603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" HandleID="k8s-pod-network.7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Workload="64.23.139.59-k8s-test--pod--1-eth0" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.381 [INFO][3603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" HandleID="k8s-pod-network.7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Workload="64.23.139.59-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000284630), Attrs:map[string]string{"namespace":"default", "node":"64.23.139.59", "pod":"test-pod-1", "timestamp":"2025-01-29 16:22:12.362163317 +0000 UTC"}, Hostname:"64.23.139.59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.381 [INFO][3603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.381 [INFO][3603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.381 [INFO][3603] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '64.23.139.59' Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.385 [INFO][3603] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" host="64.23.139.59" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.393 [INFO][3603] ipam/ipam.go 372: Looking up existing affinities for host host="64.23.139.59" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.408 [INFO][3603] ipam/ipam.go 489: Trying affinity for 192.168.58.64/26 host="64.23.139.59" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.412 [INFO][3603] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="64.23.139.59" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.417 [INFO][3603] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="64.23.139.59" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.417 [INFO][3603] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" host="64.23.139.59" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.420 [INFO][3603] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3 Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.427 [INFO][3603] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" host="64.23.139.59" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.440 [INFO][3603] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.58.68/26] block=192.168.58.64/26 
handle="k8s-pod-network.7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" host="64.23.139.59" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.440 [INFO][3603] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.68/26] handle="k8s-pod-network.7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" host="64.23.139.59" Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.440 [INFO][3603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:22:12.475322 containerd[1482]: 2025-01-29 16:22:12.440 [INFO][3603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.68/26] IPv6=[] ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" HandleID="k8s-pod-network.7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Workload="64.23.139.59-k8s-test--pod--1-eth0" Jan 29 16:22:12.476468 containerd[1482]: 2025-01-29 16:22:12.445 [INFO][3592] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.139.59-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.139.59-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"637c6949-3b8d-407e-b426-4e48cacad1b7", ResourceVersion:"1363", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"64.23.139.59", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:22:12.476468 containerd[1482]: 2025-01-29 16:22:12.446 [INFO][3592] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.68/32] ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.139.59-k8s-test--pod--1-eth0" Jan 29 16:22:12.476468 containerd[1482]: 2025-01-29 16:22:12.446 [INFO][3592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.139.59-k8s-test--pod--1-eth0" Jan 29 16:22:12.476468 containerd[1482]: 2025-01-29 16:22:12.452 [INFO][3592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.139.59-k8s-test--pod--1-eth0" Jan 29 16:22:12.476468 containerd[1482]: 2025-01-29 16:22:12.455 [INFO][3592] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.139.59-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.139.59-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"637c6949-3b8d-407e-b426-4e48cacad1b7", ResourceVersion:"1363", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 29, 16, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.139.59", ContainerID:"7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"52:84:7b:cb:95:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:22:12.476468 containerd[1482]: 2025-01-29 16:22:12.468 [INFO][3592] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.139.59-k8s-test--pod--1-eth0" Jan 29 16:22:12.521997 containerd[1482]: time="2025-01-29T16:22:12.521773461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:22:12.521997 containerd[1482]: time="2025-01-29T16:22:12.521923140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:22:12.521997 containerd[1482]: time="2025-01-29T16:22:12.521955497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:22:12.523102 containerd[1482]: time="2025-01-29T16:22:12.522569704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:22:12.569388 systemd[1]: run-containerd-runc-k8s.io-7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3-runc.AI9xk7.mount: Deactivated successfully. Jan 29 16:22:12.581441 systemd[1]: Started cri-containerd-7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3.scope - libcontainer container 7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3. Jan 29 16:22:12.653884 containerd[1482]: time="2025-01-29T16:22:12.653791552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:637c6949-3b8d-407e-b426-4e48cacad1b7,Namespace:default,Attempt:0,} returns sandbox id \"7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3\"" Jan 29 16:22:12.656303 containerd[1482]: time="2025-01-29T16:22:12.656132640Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:22:12.998391 kubelet[1831]: E0129 16:22:12.998319 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:22:13.125452 containerd[1482]: time="2025-01-29T16:22:13.125336172Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:22:13.139873 containerd[1482]: time="2025-01-29T16:22:13.131731279Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 16:22:13.143497 containerd[1482]: time="2025-01-29T16:22:13.143421625Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 487.231918ms" Jan 29 16:22:13.143735 containerd[1482]: time="2025-01-29T16:22:13.143709995Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:22:13.150178 containerd[1482]: time="2025-01-29T16:22:13.150029250Z" level=info msg="CreateContainer within sandbox \"7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 16:22:13.209415 containerd[1482]: time="2025-01-29T16:22:13.209171998Z" level=info msg="CreateContainer within sandbox \"7ee4ffb9415f2ee9bbd30cdd2beaba6b162551d5bbdae48eb0b20eff948efae3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c4a83319ebde18c06a1e5555eb19416b6324c86d479f43f74c8dd6cf41ed667a\"" Jan 29 16:22:13.210305 containerd[1482]: time="2025-01-29T16:22:13.210257541Z" level=info msg="StartContainer for \"c4a83319ebde18c06a1e5555eb19416b6324c86d479f43f74c8dd6cf41ed667a\"" Jan 29 16:22:13.250275 systemd[1]: Started cri-containerd-c4a83319ebde18c06a1e5555eb19416b6324c86d479f43f74c8dd6cf41ed667a.scope - libcontainer container c4a83319ebde18c06a1e5555eb19416b6324c86d479f43f74c8dd6cf41ed667a. 
Jan 29 16:22:13.311171 containerd[1482]: time="2025-01-29T16:22:13.310957558Z" level=info msg="StartContainer for \"c4a83319ebde18c06a1e5555eb19416b6324c86d479f43f74c8dd6cf41ed667a\" returns successfully"
Jan 29 16:22:13.998694 kubelet[1831]: E0129 16:22:13.998597 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:22:14.410277 systemd-networkd[1388]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 16:22:14.999907 kubelet[1831]: E0129 16:22:14.999801 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:22:16.001011 kubelet[1831]: E0129 16:22:16.000938 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:22:16.950393 kubelet[1831]: E0129 16:22:16.950316 1831 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:22:16.990629 containerd[1482]: time="2025-01-29T16:22:16.990220844Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\""
Jan 29 16:22:16.990629 containerd[1482]: time="2025-01-29T16:22:16.990379021Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully"
Jan 29 16:22:16.990629 containerd[1482]: time="2025-01-29T16:22:16.990394833Z" level=info msg="StopPodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully"
Jan 29 16:22:16.996988 containerd[1482]: time="2025-01-29T16:22:16.996784460Z" level=info msg="RemovePodSandbox for \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\""
Jan 29 16:22:17.002433 kubelet[1831]: E0129 16:22:17.002328 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:22:17.007519 containerd[1482]: time="2025-01-29T16:22:17.007389352Z" level=info msg="Forcibly stopping sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\""
Jan 29 16:22:17.030056 containerd[1482]: time="2025-01-29T16:22:17.007611977Z" level=info msg="TearDown network for sandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" successfully"
Jan 29 16:22:17.056787 containerd[1482]: time="2025-01-29T16:22:17.056657566Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.057265 containerd[1482]: time="2025-01-29T16:22:17.056806039Z" level=info msg="RemovePodSandbox \"eaa3b1992695534389d77f2ede316f6071c039c38a567a19f269ab2e0804da5c\" returns successfully"
Jan 29 16:22:17.058208 containerd[1482]: time="2025-01-29T16:22:17.058127185Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\""
Jan 29 16:22:17.058352 containerd[1482]: time="2025-01-29T16:22:17.058328458Z" level=info msg="TearDown network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" successfully"
Jan 29 16:22:17.058459 containerd[1482]: time="2025-01-29T16:22:17.058348217Z" level=info msg="StopPodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" returns successfully"
Jan 29 16:22:17.059641 containerd[1482]: time="2025-01-29T16:22:17.059466135Z" level=info msg="RemovePodSandbox for \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\""
Jan 29 16:22:17.059641 containerd[1482]: time="2025-01-29T16:22:17.059520035Z" level=info msg="Forcibly stopping sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\""
Jan 29 16:22:17.059897 containerd[1482]: time="2025-01-29T16:22:17.059757002Z" level=info msg="TearDown network for sandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" successfully"
Jan 29 16:22:17.066926 containerd[1482]: time="2025-01-29T16:22:17.066813521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.067157 containerd[1482]: time="2025-01-29T16:22:17.066957967Z" level=info msg="RemovePodSandbox \"59df51e8256c9ff3d042e2afdaf99bd47c124d7934542ef5289fefd6e6bd8d17\" returns successfully"
Jan 29 16:22:17.068053 containerd[1482]: time="2025-01-29T16:22:17.067897832Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\""
Jan 29 16:22:17.068199 containerd[1482]: time="2025-01-29T16:22:17.068143576Z" level=info msg="TearDown network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" successfully"
Jan 29 16:22:17.068199 containerd[1482]: time="2025-01-29T16:22:17.068171054Z" level=info msg="StopPodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" returns successfully"
Jan 29 16:22:17.070043 containerd[1482]: time="2025-01-29T16:22:17.069288691Z" level=info msg="RemovePodSandbox for \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\""
Jan 29 16:22:17.070043 containerd[1482]: time="2025-01-29T16:22:17.069336750Z" level=info msg="Forcibly stopping sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\""
Jan 29 16:22:17.070043 containerd[1482]: time="2025-01-29T16:22:17.069474647Z" level=info msg="TearDown network for sandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" successfully"
Jan 29 16:22:17.086562 containerd[1482]: time="2025-01-29T16:22:17.086433564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.096625 containerd[1482]: time="2025-01-29T16:22:17.096388895Z" level=info msg="RemovePodSandbox \"4516e69c083b9d6d48e6bc86e4317c3c3f74fc3d451766debb17c246202d84eb\" returns successfully"
Jan 29 16:22:17.097318 containerd[1482]: time="2025-01-29T16:22:17.097278286Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\""
Jan 29 16:22:17.097591 containerd[1482]: time="2025-01-29T16:22:17.097460119Z" level=info msg="TearDown network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" successfully"
Jan 29 16:22:17.097591 containerd[1482]: time="2025-01-29T16:22:17.097547657Z" level=info msg="StopPodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" returns successfully"
Jan 29 16:22:17.099095 containerd[1482]: time="2025-01-29T16:22:17.098292556Z" level=info msg="RemovePodSandbox for \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\""
Jan 29 16:22:17.099095 containerd[1482]: time="2025-01-29T16:22:17.098344408Z" level=info msg="Forcibly stopping sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\""
Jan 29 16:22:17.099095 containerd[1482]: time="2025-01-29T16:22:17.098475740Z" level=info msg="TearDown network for sandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" successfully"
Jan 29 16:22:17.106605 containerd[1482]: time="2025-01-29T16:22:17.106534619Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.107470 containerd[1482]: time="2025-01-29T16:22:17.106968157Z" level=info msg="RemovePodSandbox \"c084ab66e570bc9481cab36e23fa9c3d1eaba6f02d5fb3bce82d97c1c087d619\" returns successfully"
Jan 29 16:22:17.108395 containerd[1482]: time="2025-01-29T16:22:17.108280477Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\""
Jan 29 16:22:17.108718 containerd[1482]: time="2025-01-29T16:22:17.108668329Z" level=info msg="TearDown network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" successfully"
Jan 29 16:22:17.108718 containerd[1482]: time="2025-01-29T16:22:17.108703654Z" level=info msg="StopPodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" returns successfully"
Jan 29 16:22:17.109466 containerd[1482]: time="2025-01-29T16:22:17.109433035Z" level=info msg="RemovePodSandbox for \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\""
Jan 29 16:22:17.109466 containerd[1482]: time="2025-01-29T16:22:17.109469137Z" level=info msg="Forcibly stopping sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\""
Jan 29 16:22:17.109650 containerd[1482]: time="2025-01-29T16:22:17.109577942Z" level=info msg="TearDown network for sandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" successfully"
Jan 29 16:22:17.118020 containerd[1482]: time="2025-01-29T16:22:17.117925669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.118194 containerd[1482]: time="2025-01-29T16:22:17.118042963Z" level=info msg="RemovePodSandbox \"d270791ea3bd4f1d74ceacc6afaebb3da0460cb00932cf497b5e9f265196381d\" returns successfully"
Jan 29 16:22:17.119000 containerd[1482]: time="2025-01-29T16:22:17.118952492Z" level=info msg="StopPodSandbox for \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\""
Jan 29 16:22:17.119192 containerd[1482]: time="2025-01-29T16:22:17.119134792Z" level=info msg="TearDown network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" successfully"
Jan 29 16:22:17.119192 containerd[1482]: time="2025-01-29T16:22:17.119155321Z" level=info msg="StopPodSandbox for \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" returns successfully"
Jan 29 16:22:17.121302 containerd[1482]: time="2025-01-29T16:22:17.119705904Z" level=info msg="RemovePodSandbox for \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\""
Jan 29 16:22:17.121302 containerd[1482]: time="2025-01-29T16:22:17.119753250Z" level=info msg="Forcibly stopping sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\""
Jan 29 16:22:17.121302 containerd[1482]: time="2025-01-29T16:22:17.119864446Z" level=info msg="TearDown network for sandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" successfully"
Jan 29 16:22:17.134856 containerd[1482]: time="2025-01-29T16:22:17.134761093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.135483 containerd[1482]: time="2025-01-29T16:22:17.135431099Z" level=info msg="RemovePodSandbox \"d79fd89270a774430ef39cb3a899fa4d2f0ac89bf856353381a9ab071753aab0\" returns successfully"
Jan 29 16:22:17.136568 containerd[1482]: time="2025-01-29T16:22:17.136522850Z" level=info msg="StopPodSandbox for \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\""
Jan 29 16:22:17.136959 containerd[1482]: time="2025-01-29T16:22:17.136883014Z" level=info msg="TearDown network for sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\" successfully"
Jan 29 16:22:17.136959 containerd[1482]: time="2025-01-29T16:22:17.136909960Z" level=info msg="StopPodSandbox for \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\" returns successfully"
Jan 29 16:22:17.138907 containerd[1482]: time="2025-01-29T16:22:17.138591468Z" level=info msg="RemovePodSandbox for \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\""
Jan 29 16:22:17.138907 containerd[1482]: time="2025-01-29T16:22:17.138639626Z" level=info msg="Forcibly stopping sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\""
Jan 29 16:22:17.139401 containerd[1482]: time="2025-01-29T16:22:17.138760357Z" level=info msg="TearDown network for sandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\" successfully"
Jan 29 16:22:17.147296 containerd[1482]: time="2025-01-29T16:22:17.147231929Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.147483 containerd[1482]: time="2025-01-29T16:22:17.147340030Z" level=info msg="RemovePodSandbox \"39635c0c2cdf16a1493633ac81096546bb93763d54047c297550e15d85c375d8\" returns successfully"
Jan 29 16:22:17.148664 containerd[1482]: time="2025-01-29T16:22:17.148597643Z" level=info msg="StopPodSandbox for \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\""
Jan 29 16:22:17.148914 containerd[1482]: time="2025-01-29T16:22:17.148792060Z" level=info msg="TearDown network for sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\" successfully"
Jan 29 16:22:17.148914 containerd[1482]: time="2025-01-29T16:22:17.148857645Z" level=info msg="StopPodSandbox for \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\" returns successfully"
Jan 29 16:22:17.149815 containerd[1482]: time="2025-01-29T16:22:17.149767455Z" level=info msg="RemovePodSandbox for \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\""
Jan 29 16:22:17.150056 containerd[1482]: time="2025-01-29T16:22:17.149985418Z" level=info msg="Forcibly stopping sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\""
Jan 29 16:22:17.150270 containerd[1482]: time="2025-01-29T16:22:17.150192833Z" level=info msg="TearDown network for sandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\" successfully"
Jan 29 16:22:17.158727 containerd[1482]: time="2025-01-29T16:22:17.158618641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.158727 containerd[1482]: time="2025-01-29T16:22:17.158733440Z" level=info msg="RemovePodSandbox \"25498a4c77eadf300251165b8d86006ef78170fc2095fa542e198c213c74be98\" returns successfully"
Jan 29 16:22:17.159527 containerd[1482]: time="2025-01-29T16:22:17.159472296Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\""
Jan 29 16:22:17.159749 containerd[1482]: time="2025-01-29T16:22:17.159649546Z" level=info msg="TearDown network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" successfully"
Jan 29 16:22:17.159749 containerd[1482]: time="2025-01-29T16:22:17.159740072Z" level=info msg="StopPodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" returns successfully"
Jan 29 16:22:17.160716 containerd[1482]: time="2025-01-29T16:22:17.160675946Z" level=info msg="RemovePodSandbox for \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\""
Jan 29 16:22:17.160918 containerd[1482]: time="2025-01-29T16:22:17.160725896Z" level=info msg="Forcibly stopping sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\""
Jan 29 16:22:17.160986 containerd[1482]: time="2025-01-29T16:22:17.160883179Z" level=info msg="TearDown network for sandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" successfully"
Jan 29 16:22:17.167209 containerd[1482]: time="2025-01-29T16:22:17.167104480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.167378 containerd[1482]: time="2025-01-29T16:22:17.167239013Z" level=info msg="RemovePodSandbox \"2460e0d7709ccc21ae865b6c41cbf8a97f225a90d574594ed5c55996b74bfe0d\" returns successfully"
Jan 29 16:22:17.168249 containerd[1482]: time="2025-01-29T16:22:17.167892719Z" level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\""
Jan 29 16:22:17.168249 containerd[1482]: time="2025-01-29T16:22:17.168040276Z" level=info msg="TearDown network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" successfully"
Jan 29 16:22:17.168249 containerd[1482]: time="2025-01-29T16:22:17.168059781Z" level=info msg="StopPodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" returns successfully"
Jan 29 16:22:17.169431 containerd[1482]: time="2025-01-29T16:22:17.169345481Z" level=info msg="RemovePodSandbox for \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\""
Jan 29 16:22:17.169431 containerd[1482]: time="2025-01-29T16:22:17.169445196Z" level=info msg="Forcibly stopping sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\""
Jan 29 16:22:17.169711 containerd[1482]: time="2025-01-29T16:22:17.169566299Z" level=info msg="TearDown network for sandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" successfully"
Jan 29 16:22:17.176173 containerd[1482]: time="2025-01-29T16:22:17.176082617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.176173 containerd[1482]: time="2025-01-29T16:22:17.176178045Z" level=info msg="RemovePodSandbox \"cc7cadd87522ef8fef23f5df261fdee928c77bdcff71e2947a44e9f63e06003a\" returns successfully"
Jan 29 16:22:17.177457 containerd[1482]: time="2025-01-29T16:22:17.177397446Z" level=info msg="StopPodSandbox for \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\""
Jan 29 16:22:17.177635 containerd[1482]: time="2025-01-29T16:22:17.177590508Z" level=info msg="TearDown network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" successfully"
Jan 29 16:22:17.177635 containerd[1482]: time="2025-01-29T16:22:17.177611251Z" level=info msg="StopPodSandbox for \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" returns successfully"
Jan 29 16:22:17.178988 containerd[1482]: time="2025-01-29T16:22:17.178160855Z" level=info msg="RemovePodSandbox for \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\""
Jan 29 16:22:17.178988 containerd[1482]: time="2025-01-29T16:22:17.178214616Z" level=info msg="Forcibly stopping sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\""
Jan 29 16:22:17.194151 containerd[1482]: time="2025-01-29T16:22:17.193990117Z" level=info msg="TearDown network for sandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" successfully"
Jan 29 16:22:17.199856 containerd[1482]: time="2025-01-29T16:22:17.199762668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.200669 containerd[1482]: time="2025-01-29T16:22:17.199887272Z" level=info msg="RemovePodSandbox \"4e1878748333192aa0e862075ba6fcbf32a59a0d9eb3ffa6cb4aceaa2ff1603d\" returns successfully"
Jan 29 16:22:17.202968 containerd[1482]: time="2025-01-29T16:22:17.202067770Z" level=info msg="StopPodSandbox for \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\""
Jan 29 16:22:17.202968 containerd[1482]: time="2025-01-29T16:22:17.202343723Z" level=info msg="TearDown network for sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\" successfully"
Jan 29 16:22:17.202968 containerd[1482]: time="2025-01-29T16:22:17.202372200Z" level=info msg="StopPodSandbox for \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\" returns successfully"
Jan 29 16:22:17.203449 containerd[1482]: time="2025-01-29T16:22:17.203327731Z" level=info msg="RemovePodSandbox for \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\""
Jan 29 16:22:17.203449 containerd[1482]: time="2025-01-29T16:22:17.203382743Z" level=info msg="Forcibly stopping sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\""
Jan 29 16:22:17.203687 containerd[1482]: time="2025-01-29T16:22:17.203510115Z" level=info msg="TearDown network for sandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\" successfully"
Jan 29 16:22:17.210992 containerd[1482]: time="2025-01-29T16:22:17.210910184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.211943 containerd[1482]: time="2025-01-29T16:22:17.211017627Z" level=info msg="RemovePodSandbox \"170082add94583d84f5fdbdc8f89a9e615ae26fb1a6c89836e8a6fd7abfad48e\" returns successfully"
Jan 29 16:22:17.211943 containerd[1482]: time="2025-01-29T16:22:17.211601420Z" level=info msg="StopPodSandbox for \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\""
Jan 29 16:22:17.211943 containerd[1482]: time="2025-01-29T16:22:17.211773646Z" level=info msg="TearDown network for sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\" successfully"
Jan 29 16:22:17.211943 containerd[1482]: time="2025-01-29T16:22:17.211790975Z" level=info msg="StopPodSandbox for \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\" returns successfully"
Jan 29 16:22:17.212598 containerd[1482]: time="2025-01-29T16:22:17.212524176Z" level=info msg="RemovePodSandbox for \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\""
Jan 29 16:22:17.212742 containerd[1482]: time="2025-01-29T16:22:17.212564958Z" level=info msg="Forcibly stopping sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\""
Jan 29 16:22:17.212960 containerd[1482]: time="2025-01-29T16:22:17.212842803Z" level=info msg="TearDown network for sandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\" successfully"
Jan 29 16:22:17.218182 containerd[1482]: time="2025-01-29T16:22:17.218110198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:22:17.218384 containerd[1482]: time="2025-01-29T16:22:17.218273292Z" level=info msg="RemovePodSandbox \"8491fc9d2bec06bb943a678c5edfa0fd8f52ec539a5261b555b08b0267b3a886\" returns successfully"
Jan 29 16:22:18.003245 kubelet[1831]: E0129 16:22:18.003143 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"