Jan 29 16:39:42.006059 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025 Jan 29 16:39:42.006087 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:39:42.006098 kernel: BIOS-provided physical RAM map: Jan 29 16:39:42.006105 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 16:39:42.006113 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 16:39:42.006124 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 16:39:42.006133 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 29 16:39:42.006141 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 29 16:39:42.006148 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 16:39:42.006156 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 16:39:42.006164 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 29 16:39:42.006172 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 16:39:42.006179 kernel: NX (Execute Disable) protection: active Jan 29 16:39:42.006187 kernel: APIC: Static calls initialized Jan 29 16:39:42.006200 kernel: SMBIOS 3.0.0 present. Jan 29 16:39:42.006208 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 29 16:39:42.006216 kernel: Hypervisor detected: KVM Jan 29 16:39:42.006224 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 16:39:42.006262 kernel: kvm-clock: using sched offset of 4177995706 cycles Jan 29 16:39:42.006275 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 16:39:42.006284 kernel: tsc: Detected 1996.249 MHz processor Jan 29 16:39:42.006292 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 16:39:42.006301 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 16:39:42.006309 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 29 16:39:42.006318 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 16:39:42.006326 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 16:39:42.006334 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 29 16:39:42.006342 kernel: ACPI: Early table checksum verification disabled Jan 29 16:39:42.006354 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 29 16:39:42.006362 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:39:42.006371 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:39:42.006379 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:39:42.006387 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 29 16:39:42.006395 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:39:42.006404 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:39:42.006413 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 29 16:39:42.006421 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 29 16:39:42.006432 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 29 16:39:42.006440 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 29 16:39:42.006449 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 29 16:39:42.006461 kernel: No NUMA configuration found Jan 29 16:39:42.006470 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 29 16:39:42.006478 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff] Jan 29 16:39:42.006490 kernel: Zone ranges: Jan 29 16:39:42.006499 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 16:39:42.006507 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 16:39:42.006516 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 29 16:39:42.006525 kernel: Movable zone start for each node Jan 29 16:39:42.006533 kernel: Early memory node ranges Jan 29 16:39:42.006542 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 16:39:42.006550 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 29 16:39:42.006559 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 29 16:39:42.006570 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 29 16:39:42.006579 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 16:39:42.006587 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 16:39:42.006596 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 29 16:39:42.006605 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 16:39:42.006614 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 16:39:42.006622 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 16:39:42.006631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 16:39:42.006640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 16:39:42.006652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 16:39:42.006661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 16:39:42.006670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 16:39:42.006679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 16:39:42.006688 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 16:39:42.006696 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 16:39:42.006705 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 29 16:39:42.006714 kernel: Booting paravirtualized kernel on KVM Jan 29 16:39:42.006722 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 16:39:42.006734 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 16:39:42.006743 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 16:39:42.006752 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 16:39:42.006760 kernel: pcpu-alloc: [0] 0 1 Jan 29 16:39:42.006769 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 29 16:39:42.006779 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:39:42.006788 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 16:39:42.006801 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 16:39:42.006809 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 16:39:42.006818 kernel: Fallback order for Node 0: 0 Jan 29 16:39:42.006827 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 29 16:39:42.006835 kernel: Policy zone: Normal Jan 29 16:39:42.006844 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 16:39:42.006852 kernel: software IO TLB: area num 2. Jan 29 16:39:42.006861 kernel: Memory: 3964168K/4193772K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 229344K reserved, 0K cma-reserved) Jan 29 16:39:42.006870 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 16:39:42.006882 kernel: ftrace: allocating 37893 entries in 149 pages Jan 29 16:39:42.006891 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 16:39:42.006899 kernel: Dynamic Preempt: voluntary Jan 29 16:39:42.006908 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 16:39:42.006917 kernel: rcu: RCU event tracing is enabled. Jan 29 16:39:42.006926 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 16:39:42.006935 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 16:39:42.006944 kernel: Rude variant of Tasks RCU enabled. Jan 29 16:39:42.006952 kernel: Tracing variant of Tasks RCU enabled. Jan 29 16:39:42.006961 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 16:39:42.006973 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 16:39:42.006981 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 16:39:42.006990 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 16:39:42.006998 kernel: Console: colour VGA+ 80x25 Jan 29 16:39:42.007007 kernel: printk: console [tty0] enabled Jan 29 16:39:42.007016 kernel: printk: console [ttyS0] enabled Jan 29 16:39:42.007024 kernel: ACPI: Core revision 20230628 Jan 29 16:39:42.007033 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 16:39:42.007042 kernel: x2apic enabled Jan 29 16:39:42.007053 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 16:39:42.007062 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 16:39:42.007071 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 16:39:42.007080 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 29 16:39:42.007088 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 29 16:39:42.007097 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 29 16:39:42.007105 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 16:39:42.007114 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 16:39:42.007123 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 16:39:42.007135 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 16:39:42.007144 kernel: Speculative Store Bypass: Vulnerable Jan 29 16:39:42.007152 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 29 16:39:42.007161 kernel: Freeing SMP alternatives memory: 32K Jan 29 16:39:42.007179 kernel: pid_max: default: 32768 minimum: 301 Jan 29 16:39:42.007191 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 16:39:42.007200 kernel: landlock: Up and running. Jan 29 16:39:42.007209 kernel: SELinux: Initializing. Jan 29 16:39:42.007218 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 16:39:42.007239 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 16:39:42.007249 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 29 16:39:42.007262 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:39:42.007271 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:39:42.007281 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:39:42.007299 kernel: Performance Events: AMD PMU driver. Jan 29 16:39:42.007309 kernel: ... version: 0 Jan 29 16:39:42.007320 kernel: ... bit width: 48 Jan 29 16:39:42.007329 kernel: ... generic registers: 4 Jan 29 16:39:42.007338 kernel: ... value mask: 0000ffffffffffff Jan 29 16:39:42.007347 kernel: ... max period: 00007fffffffffff Jan 29 16:39:42.007356 kernel: ... fixed-purpose events: 0 Jan 29 16:39:42.007365 kernel: ... event mask: 000000000000000f Jan 29 16:39:42.007374 kernel: signal: max sigframe size: 1440 Jan 29 16:39:42.007383 kernel: rcu: Hierarchical SRCU implementation. Jan 29 16:39:42.007392 kernel: rcu: Max phase no-delay instances is 400. Jan 29 16:39:42.007403 kernel: smp: Bringing up secondary CPUs ... Jan 29 16:39:42.007413 kernel: smpboot: x86: Booting SMP configuration: Jan 29 16:39:42.007422 kernel: .... 
node #0, CPUs: #1 Jan 29 16:39:42.007431 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 16:39:42.007440 kernel: smpboot: Max logical packages: 2 Jan 29 16:39:42.007449 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 29 16:39:42.007458 kernel: devtmpfs: initialized Jan 29 16:39:42.007467 kernel: x86/mm: Memory block size: 128MB Jan 29 16:39:42.007476 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 16:39:42.007485 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 16:39:42.007497 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 16:39:42.007506 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 16:39:42.007515 kernel: audit: initializing netlink subsys (disabled) Jan 29 16:39:42.007524 kernel: audit: type=2000 audit(1738168781.268:1): state=initialized audit_enabled=0 res=1 Jan 29 16:39:42.007533 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 16:39:42.007542 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 16:39:42.007551 kernel: cpuidle: using governor menu Jan 29 16:39:42.007560 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 16:39:42.007572 kernel: dca service started, version 1.12.1 Jan 29 16:39:42.007581 kernel: PCI: Using configuration type 1 for base access Jan 29 16:39:42.007591 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 16:39:42.007600 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 16:39:42.007609 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 16:39:42.007618 kernel: ACPI: Added _OSI(Module Device) Jan 29 16:39:42.007627 kernel: ACPI: Added _OSI(Processor Device) Jan 29 16:39:42.007636 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 16:39:42.007645 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 16:39:42.007654 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 16:39:42.007665 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 16:39:42.007674 kernel: ACPI: Interpreter enabled Jan 29 16:39:42.007684 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 16:39:42.007692 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 16:39:42.007702 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 16:39:42.007711 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 16:39:42.007720 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 29 16:39:42.007729 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 16:39:42.007875 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 16:39:42.007976 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 16:39:42.008069 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 16:39:42.008084 kernel: acpiphp: Slot [3] registered Jan 29 16:39:42.008093 kernel: acpiphp: Slot [4] registered Jan 29 16:39:42.008102 kernel: acpiphp: Slot [5] registered Jan 29 16:39:42.008111 kernel: acpiphp: Slot [6] registered Jan 29 16:39:42.008120 kernel: acpiphp: Slot [7] registered Jan 29 16:39:42.008134 kernel: acpiphp: Slot [8] registered Jan 29 16:39:42.008143 kernel: acpiphp: Slot [9] registered Jan 29 16:39:42.008152 kernel: acpiphp: Slot [10] registered Jan 29 16:39:42.008161 
kernel: acpiphp: Slot [11] registered Jan 29 16:39:42.008169 kernel: acpiphp: Slot [12] registered Jan 29 16:39:42.008178 kernel: acpiphp: Slot [13] registered Jan 29 16:39:42.008187 kernel: acpiphp: Slot [14] registered Jan 29 16:39:42.008196 kernel: acpiphp: Slot [15] registered Jan 29 16:39:42.008205 kernel: acpiphp: Slot [16] registered Jan 29 16:39:42.008217 kernel: acpiphp: Slot [17] registered Jan 29 16:39:42.010776 kernel: acpiphp: Slot [18] registered Jan 29 16:39:42.010822 kernel: acpiphp: Slot [19] registered Jan 29 16:39:42.010833 kernel: acpiphp: Slot [20] registered Jan 29 16:39:42.010843 kernel: acpiphp: Slot [21] registered Jan 29 16:39:42.010853 kernel: acpiphp: Slot [22] registered Jan 29 16:39:42.010863 kernel: acpiphp: Slot [23] registered Jan 29 16:39:42.010872 kernel: acpiphp: Slot [24] registered Jan 29 16:39:42.010882 kernel: acpiphp: Slot [25] registered Jan 29 16:39:42.010891 kernel: acpiphp: Slot [26] registered Jan 29 16:39:42.010907 kernel: acpiphp: Slot [27] registered Jan 29 16:39:42.010917 kernel: acpiphp: Slot [28] registered Jan 29 16:39:42.010926 kernel: acpiphp: Slot [29] registered Jan 29 16:39:42.010936 kernel: acpiphp: Slot [30] registered Jan 29 16:39:42.010945 kernel: acpiphp: Slot [31] registered Jan 29 16:39:42.010954 kernel: PCI host bridge to bus 0000:00 Jan 29 16:39:42.011102 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 16:39:42.011193 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 16:39:42.011319 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 16:39:42.011407 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 16:39:42.011489 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 29 16:39:42.011572 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 16:39:42.011689 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 16:39:42.011797 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 16:39:42.011911 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 29 16:39:42.012009 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 29 16:39:42.012103 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 16:39:42.012199 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 16:39:42.012328 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 16:39:42.012426 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 16:39:42.012531 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 16:39:42.012635 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 29 16:39:42.012729 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 29 16:39:42.012835 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 29 16:39:42.012932 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 29 16:39:42.013027 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 29 16:39:42.013125 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 29 16:39:42.013220 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 29 16:39:42.013349 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 16:39:42.013461 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 29 16:39:42.013560 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 29 16:39:42.013658 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 29 16:39:42.013753 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 29 16:39:42.013847 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 29 16:39:42.013953 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 29 16:39:42.014055 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 16:39:42.014152 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 29 16:39:42.014294 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 29 16:39:42.014402 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 29 16:39:42.014499 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 29 16:39:42.014593 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 29 16:39:42.014698 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 16:39:42.014802 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 29 16:39:42.014896 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 29 16:39:42.014992 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 29 16:39:42.015006 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 16:39:42.015016 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 16:39:42.015026 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 16:39:42.015036 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 16:39:42.015045 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 16:39:42.015059 kernel: iommu: Default domain type: Translated Jan 29 16:39:42.015069 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 16:39:42.015078 kernel: PCI: Using ACPI for IRQ routing Jan 29 16:39:42.015088 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 16:39:42.015098 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 16:39:42.015108 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 29 16:39:42.015201 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 29 16:39:42.016881 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 29 16:39:42.016988 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 16:39:42.017003 kernel: vgaarb: loaded Jan 29 16:39:42.017013 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 16:39:42.017023 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 16:39:42.017033 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 16:39:42.017042 kernel: pnp: PnP ACPI init Jan 29 16:39:42.017143 kernel: pnp 00:03: [dma 2] Jan 29 16:39:42.017159 kernel: pnp: PnP ACPI: found 5 devices Jan 29 16:39:42.017169 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 16:39:42.017183 kernel: NET: Registered PF_INET protocol family Jan 29 16:39:42.017193 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 16:39:42.017202 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 16:39:42.017212 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 16:39:42.017222 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 16:39:42.019703 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 29 16:39:42.019716 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 16:39:42.019726 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 16:39:42.019741 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 16:39:42.019751 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 16:39:42.019760 kernel: NET: Registered PF_XDP protocol family Jan 29 16:39:42.019854 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 16:39:42.019938 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 16:39:42.020023 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 16:39:42.020106 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 29 16:39:42.020188 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 29 16:39:42.020323 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 29 16:39:42.020427 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 16:39:42.020442 kernel: PCI: CLS 0 bytes, default 64 Jan 29 16:39:42.020452 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 16:39:42.020461 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 29 16:39:42.020471 kernel: Initialise system trusted keyrings Jan 29 16:39:42.020481 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 16:39:42.020490 kernel: Key type asymmetric registered Jan 29 16:39:42.020499 kernel: Asymmetric key parser 'x509' registered Jan 29 16:39:42.020513 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 16:39:42.020523 kernel: io scheduler mq-deadline registered Jan 29 16:39:42.020532 kernel: io scheduler kyber registered Jan 29 16:39:42.020542 kernel: io scheduler bfq registered Jan 29 16:39:42.020552 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 16:39:42.020562 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 29 16:39:42.020572 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 16:39:42.020581 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 16:39:42.020591 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 16:39:42.020603 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:39:42.020614 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 16:39:42.020623 kernel: random: crng init done Jan 29 16:39:42.020632 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 16:39:42.020642 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 16:39:42.020651 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 16:39:42.020746 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 16:39:42.020762 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 16:39:42.020844 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 16:39:42.020934 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T16:39:41 UTC (1738168781) Jan 29 16:39:42.021018 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 29 16:39:42.021032 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 16:39:42.021042 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:39:42.021052 kernel: Segment Routing with IPv6 Jan 29 16:39:42.021061 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:39:42.021070 kernel: NET: Registered PF_PACKET 
protocol family Jan 29 16:39:42.021080 kernel: Key type dns_resolver registered Jan 29 16:39:42.021093 kernel: IPI shorthand broadcast: enabled Jan 29 16:39:42.021102 kernel: sched_clock: Marking stable (967008606, 176082471)->(1186618122, -43527045) Jan 29 16:39:42.021112 kernel: registered taskstats version 1 Jan 29 16:39:42.021121 kernel: Loading compiled-in X.509 certificates Jan 29 16:39:42.021131 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340' Jan 29 16:39:42.021140 kernel: Key type .fscrypt registered Jan 29 16:39:42.021149 kernel: Key type fscrypt-provisioning registered Jan 29 16:39:42.021159 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 16:39:42.021169 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:39:42.021182 kernel: ima: No architecture policies found Jan 29 16:39:42.021191 kernel: clk: Disabling unused clocks Jan 29 16:39:42.021200 kernel: Freeing unused kernel image (initmem) memory: 43472K Jan 29 16:39:42.021210 kernel: Write protecting the kernel read-only data: 38912k Jan 29 16:39:42.021219 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Jan 29 16:39:42.021258 kernel: Run /init as init process Jan 29 16:39:42.021268 kernel: with arguments: Jan 29 16:39:42.021278 kernel: /init Jan 29 16:39:42.021287 kernel: with environment: Jan 29 16:39:42.021300 kernel: HOME=/ Jan 29 16:39:42.021309 kernel: TERM=linux Jan 29 16:39:42.021318 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:39:42.021329 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:39:42.021343 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:39:42.021354 systemd[1]: Detected virtualization kvm. Jan 29 16:39:42.021364 systemd[1]: Detected architecture x86-64. Jan 29 16:39:42.021379 systemd[1]: Running in initrd. Jan 29 16:39:42.021390 systemd[1]: No hostname configured, using default hostname. Jan 29 16:39:42.021401 systemd[1]: Hostname set to . Jan 29 16:39:42.021412 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:39:42.021422 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:39:42.021433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:39:42.021445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:39:42.021470 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:39:42.021484 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:39:42.021495 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:39:42.021507 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:39:42.021520 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:39:42.021534 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Jan 29 16:39:42.021545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:39:42.021556 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:39:42.021567 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:39:42.021578 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:39:42.021589 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:39:42.021600 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:39:42.021612 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:39:42.021623 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:39:42.021637 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 16:39:42.021648 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:39:42.021659 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:39:42.021670 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:39:42.021681 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:39:42.021692 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:39:42.021703 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:39:42.021714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:39:42.021725 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:39:42.021739 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:39:42.021750 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:39:42.021761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:39:42.021773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:39:42.021784 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:39:42.021795 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:39:42.021810 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:39:42.021822 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:39:42.021862 systemd-journald[184]: Collecting audit messages is disabled. Jan 29 16:39:42.021897 systemd-journald[184]: Journal started Jan 29 16:39:42.021922 systemd-journald[184]: Runtime Journal (/run/log/journal/b337644c963b4da083564898f02237d6) is 8M, max 78.3M, 70.3M free. Jan 29 16:39:41.984604 systemd-modules-load[185]: Inserted module 'overlay' Jan 29 16:39:42.052597 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 16:39:42.052622 kernel: Bridge firewalling registered Jan 29 16:39:42.052635 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:39:42.031159 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 29 16:39:42.053361 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:39:42.054242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:39:42.055418 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:39:42.063373 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 16:39:42.065120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:39:42.069264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:39:42.078733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:39:42.079985 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:39:42.088072 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:39:42.093082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:39:42.093824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:39:42.101479 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:39:42.104951 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:39:42.114548 dracut-cmdline[221]: dracut-dracut-053 Jan 29 16:39:42.117869 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:39:42.157539 systemd-resolved[222]: Positive Trust Anchors: Jan 29 16:39:42.157556 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:39:42.157600 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:39:42.162077 systemd-resolved[222]: Defaulting to hostname 'linux'. Jan 29 16:39:42.165931 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:39:42.169786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:39:42.226384 kernel: SCSI subsystem initialized Jan 29 16:39:42.239317 kernel: Loading iSCSI transport class v2.0-870. Jan 29 16:39:42.252455 kernel: iscsi: registered transport (tcp) Jan 29 16:39:42.275361 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:39:42.275443 kernel: QLogic iSCSI HBA Driver Jan 29 16:39:42.335653 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 16:39:42.345498 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:39:42.400937 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 16:39:42.401092 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:39:42.403494 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:39:42.469514 kernel: raid6: sse2x4 gen() 4908 MB/s Jan 29 16:39:42.488333 kernel: raid6: sse2x2 gen() 5140 MB/s Jan 29 16:39:42.506863 kernel: raid6: sse2x1 gen() 8364 MB/s Jan 29 16:39:42.506937 kernel: raid6: using algorithm sse2x1 gen() 8364 MB/s Jan 29 16:39:42.525971 kernel: raid6: .... xor() 5433 MB/s, rmw enabled Jan 29 16:39:42.526068 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 16:39:42.551513 kernel: xor: measuring software checksum speed Jan 29 16:39:42.551615 kernel: prefetch64-sse : 15932 MB/sec Jan 29 16:39:42.553786 kernel: generic_sse : 15439 MB/sec Jan 29 16:39:42.553843 kernel: xor: using function: prefetch64-sse (15932 MB/sec) Jan 29 16:39:42.749612 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:39:42.763609 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:39:42.769437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:39:42.797326 systemd-udevd[406]: Using default interface naming scheme 'v255'. Jan 29 16:39:42.803063 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:39:42.814014 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:39:42.835794 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jan 29 16:39:42.870480 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:39:42.876528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:39:42.928022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:39:42.940354 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:39:42.981465 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:39:42.985439 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:39:42.989450 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:39:42.992826 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:39:43.002489 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:39:43.013594 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:39:43.052336 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 29 16:39:43.076759 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 29 16:39:43.076914 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:39:43.076931 kernel: GPT:17805311 != 20971519 Jan 29 16:39:43.076945 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 16:39:43.076968 kernel: GPT:17805311 != 20971519 Jan 29 16:39:43.076981 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:39:43.076994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:39:43.077742 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:39:43.079507 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:39:43.080547 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:39:43.083587 kernel: libata version 3.00 loaded. 
Jan 29 16:39:43.082131 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:39:43.082204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:39:43.085009 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:39:43.089305 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 29 16:39:43.102849 kernel: scsi host0: ata_piix Jan 29 16:39:43.103220 kernel: scsi host1: ata_piix Jan 29 16:39:43.103399 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 29 16:39:43.103425 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 29 16:39:43.092422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:39:43.113604 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (471) Jan 29 16:39:43.129355 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (455) Jan 29 16:39:43.165601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:39:43.194655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:39:43.240138 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 16:39:43.264643 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 16:39:43.265264 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 16:39:43.276654 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 16:39:43.287426 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:39:43.290289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:39:43.301541 disk-uuid[508]: Primary Header is updated. Jan 29 16:39:43.301541 disk-uuid[508]: Secondary Entries is updated. Jan 29 16:39:43.301541 disk-uuid[508]: Secondary Header is updated. Jan 29 16:39:43.314268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:39:43.326875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:39:44.332299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:39:44.334655 disk-uuid[509]: The operation has completed successfully. Jan 29 16:39:44.417564 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:39:44.417677 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:39:44.472382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:39:44.488094 sh[528]: Success Jan 29 16:39:44.506262 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 29 16:39:44.583647 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:39:44.585943 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:39:44.587129 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 16:39:44.626513 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:39:44.626605 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:39:44.626631 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:39:44.626655 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:39:44.628101 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:39:44.643940 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:39:44.646219 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:39:44.655512 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:39:44.658488 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:39:44.674852 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:39:44.674908 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:39:44.674927 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:39:44.681265 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:39:44.695091 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:39:44.694599 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:39:44.706527 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:39:44.713730 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:39:44.791010 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:39:44.801449 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:39:44.838015 systemd-networkd[713]: lo: Link UP Jan 29 16:39:44.838024 systemd-networkd[713]: lo: Gained carrier Jan 29 16:39:44.839346 systemd-networkd[713]: Enumeration completed Jan 29 16:39:44.839428 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:39:44.839818 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:39:44.839823 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:39:44.840741 systemd-networkd[713]: eth0: Link UP Jan 29 16:39:44.840746 systemd-networkd[713]: eth0: Gained carrier Jan 29 16:39:44.840754 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:39:44.844520 systemd[1]: Reached target network.target - Network. Jan 29 16:39:44.854277 systemd-networkd[713]: eth0: DHCPv4 address 172.24.4.158/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 16:39:44.925017 ignition[621]: Ignition 2.20.0 Jan 29 16:39:44.925061 ignition[621]: Stage: fetch-offline Jan 29 16:39:44.925163 ignition[621]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:39:44.928696 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 16:39:44.925202 ignition[621]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:39:44.925575 ignition[621]: parsed url from cmdline: "" Jan 29 16:39:44.925590 ignition[621]: no config URL provided Jan 29 16:39:44.925611 ignition[621]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:39:44.925643 ignition[621]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:39:44.925660 ignition[621]: failed to fetch config: resource requires networking Jan 29 16:39:44.926274 ignition[621]: Ignition finished successfully Jan 29 16:39:44.939583 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 16:39:44.972390 ignition[725]: Ignition 2.20.0 Jan 29 16:39:44.972436 ignition[725]: Stage: fetch Jan 29 16:39:44.972996 ignition[725]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:39:44.973037 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:39:44.973390 ignition[725]: parsed url from cmdline: "" Jan 29 16:39:44.973404 ignition[725]: no config URL provided Jan 29 16:39:44.973422 ignition[725]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:39:44.973453 ignition[725]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:39:44.973689 ignition[725]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 29 16:39:44.974011 ignition[725]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 29 16:39:44.974093 ignition[725]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 29 16:39:45.277274 ignition[725]: GET result: OK Jan 29 16:39:45.277409 ignition[725]: parsing config with SHA512: cb77e144f5b6cbba574c22c3257be5464bb558dbcee3eed0933a09236fbcbc27150d8bcffbed199f11e89d5cb0179c3283a29beaf5e83654110cc92a53bb5a7f Jan 29 16:39:45.286479 unknown[725]: fetched base config from "system" Jan 29 16:39:45.287078 unknown[725]: fetched base config from "system" Jan 29 16:39:45.287782 ignition[725]: fetch: fetch complete Jan 29 16:39:45.287098 unknown[725]: fetched user config from "openstack" Jan 29 16:39:45.287796 ignition[725]: fetch: fetch passed Jan 29 16:39:45.292067 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 16:39:45.287938 ignition[725]: Ignition finished successfully Jan 29 16:39:45.303580 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 16:39:45.350697 ignition[732]: Ignition 2.20.0 Jan 29 16:39:45.350726 ignition[732]: Stage: kargs Jan 29 16:39:45.351152 ignition[732]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:39:45.351196 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:39:45.353199 ignition[732]: kargs: kargs passed Jan 29 16:39:45.356871 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 16:39:45.353377 ignition[732]: Ignition finished successfully Jan 29 16:39:45.365581 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:39:45.413937 ignition[738]: Ignition 2.20.0 Jan 29 16:39:45.413967 ignition[738]: Stage: disks Jan 29 16:39:45.414439 ignition[738]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:39:45.414466 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:39:45.418910 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:39:45.416419 ignition[738]: disks: disks passed Jan 29 16:39:45.422072 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 29 16:39:45.416521 ignition[738]: Ignition finished successfully Jan 29 16:39:45.423490 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:39:45.425056 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:39:45.427332 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:39:45.429012 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:39:45.441467 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:39:45.586568 systemd-fsck[747]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 16:39:45.598589 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:39:45.626540 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:39:45.906368 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none. Jan 29 16:39:45.908081 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:39:45.909852 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:39:45.919428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:39:45.923471 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:39:45.925009 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 16:39:45.928967 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 29 16:39:45.933415 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:39:45.950682 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (755) Jan 29 16:39:45.950759 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:39:45.950821 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:39:45.950867 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:39:45.950915 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:39:45.933484 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:39:45.936820 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 16:39:45.969466 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 16:39:45.974650 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:39:46.075572 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:39:46.084759 initrd-setup-root[790]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:39:46.091499 initrd-setup-root[797]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:39:46.097269 initrd-setup-root[804]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:39:46.235690 systemd-networkd[713]: eth0: Gained IPv6LL Jan 29 16:39:46.268517 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:39:46.278423 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:39:46.288603 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 29 16:39:46.310403 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:39:46.339067 ignition[872]: INFO : Ignition 2.20.0 Jan 29 16:39:46.339067 ignition[872]: INFO : Stage: mount Jan 29 16:39:46.342412 ignition[872]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:39:46.342412 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:39:46.342412 ignition[872]: INFO : mount: mount passed Jan 29 16:39:46.342412 ignition[872]: INFO : Ignition finished successfully Jan 29 16:39:46.341662 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:39:46.354212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 16:39:46.622332 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:39:53.103550 coreos-metadata[757]: Jan 29 16:39:53.103 WARN failed to locate config-drive, using the metadata service API instead Jan 29 16:39:53.143435 coreos-metadata[757]: Jan 29 16:39:53.143 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 16:39:53.158604 coreos-metadata[757]: Jan 29 16:39:53.158 INFO Fetch successful Jan 29 16:39:53.160413 coreos-metadata[757]: Jan 29 16:39:53.159 INFO wrote hostname ci-4230-0-0-b-151e37739e.novalocal to /sysroot/etc/hostname Jan 29 16:39:53.162750 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 29 16:39:53.162872 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 29 16:39:53.177404 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:39:53.183749 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:39:53.217337 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (889) Jan 29 16:39:53.228539 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:39:53.228655 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:39:53.232923 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:39:53.244301 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:39:53.250019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:39:53.290567 ignition[906]: INFO : Ignition 2.20.0 Jan 29 16:39:53.292513 ignition[906]: INFO : Stage: files Jan 29 16:39:53.292513 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:39:53.292513 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:39:53.297223 ignition[906]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:39:53.297223 ignition[906]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:39:53.297223 ignition[906]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:39:53.308610 ignition[906]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:39:53.309470 ignition[906]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:39:53.310314 ignition[906]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:39:53.309984 unknown[906]: wrote ssh authorized keys file for user: core Jan 29 16:39:53.313518 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:39:53.314452 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:39:53.314452 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:39:53.316322 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:39:53.316322 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:39:53.316322 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:39:53.316322 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:39:53.316322 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 16:39:53.720021 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 16:39:55.268037 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:39:55.268037 ignition[906]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:39:55.268037 ignition[906]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:39:55.268037 ignition[906]: INFO : files: files passed Jan 29 16:39:55.268037 ignition[906]: INFO : Ignition finished successfully Jan 29 16:39:55.273744 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:39:55.279504 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 29 16:39:55.292395 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:39:55.303082 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:39:55.303948 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:39:55.313945 initrd-setup-root-after-ignition[935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:39:55.315037 initrd-setup-root-after-ignition[935]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:39:55.316725 initrd-setup-root-after-ignition[939]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:39:55.319094 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:39:55.320174 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:39:55.327394 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:39:55.381107 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:39:55.381371 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:39:55.384349 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:39:55.386416 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:39:55.388722 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:39:55.396695 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:39:55.428416 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:39:55.438519 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:39:55.475181 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:39:55.476833 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:39:55.479804 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:39:55.482477 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:39:55.482746 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:39:55.485706 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:39:55.487544 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:39:55.490216 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:39:55.492666 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:39:55.495061 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:39:55.507530 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:39:55.510273 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:39:55.513315 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:39:55.515996 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:39:55.518855 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:39:55.521438 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:39:55.521705 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:39:55.524717 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 29 16:39:55.526500 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:39:55.528937 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:39:55.529172 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:39:55.531931 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:39:55.532345 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:39:55.535865 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:39:55.536258 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:39:55.539200 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:39:55.539509 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:39:55.551494 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:39:55.558621 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:39:55.564348 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:39:55.564820 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:39:55.569593 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:39:55.569973 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:39:55.582606 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:39:55.583343 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:39:55.592276 ignition[959]: INFO : Ignition 2.20.0 Jan 29 16:39:55.592276 ignition[959]: INFO : Stage: umount Jan 29 16:39:55.592276 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:39:55.592276 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:39:55.595344 ignition[959]: INFO : umount: umount passed Jan 29 16:39:55.595344 ignition[959]: INFO : Ignition finished successfully Jan 29 16:39:55.594676 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:39:55.597276 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:39:55.598600 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:39:55.598651 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:39:55.599759 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:39:55.599805 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:39:55.602122 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:39:55.602164 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:39:55.603193 systemd[1]: Stopped target network.target - Network. Jan 29 16:39:55.604148 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:39:55.604193 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:39:55.606176 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:39:55.607173 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:39:55.612293 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:39:55.612918 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:39:55.614459 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 29 16:39:55.615628 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:39:55.615668 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:39:55.616849 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:39:55.616881 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:39:55.617863 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:39:55.617910 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:39:55.618893 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:39:55.618936 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:39:55.621562 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:39:55.623349 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:39:55.627999 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:39:55.632005 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:39:55.632139 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:39:55.635594 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:39:55.635809 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:39:55.635921 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:39:55.637879 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:39:55.638445 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:39:55.638491 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:39:55.649340 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:39:55.650089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:39:55.650142 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:39:55.650744 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:39:55.650792 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:39:55.651474 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:39:55.651518 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:39:55.652539 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:39:55.652580 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:39:55.654178 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:39:55.656253 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:39:55.656327 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:39:55.663989 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:39:55.664409 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:39:55.665510 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:39:55.665607 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:39:55.667178 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:39:55.667451 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 29 16:39:55.669004 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:39:55.669037 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:39:55.671023 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:39:55.671072 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:39:55.674157 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:39:55.674203 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:39:55.679715 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:39:55.679853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:39:55.693384 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:39:55.694589 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:39:55.694656 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:39:55.697446 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 16:39:55.697492 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:39:55.698116 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:39:55.698161 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:39:55.698776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:39:55.698820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:39:55.701044 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:39:55.701098 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:39:55.701529 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:39:55.701627 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:39:55.799706 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:39:55.799949 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:39:55.804053 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:39:55.805546 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:39:55.805670 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:39:55.817691 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:39:55.846860 systemd[1]: Switching root. Jan 29 16:39:55.904172 systemd-journald[184]: Journal stopped Jan 29 16:39:57.378582 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
Jan 29 16:39:57.378658 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:39:57.378683 kernel: SELinux: policy capability open_perms=1 Jan 29 16:39:57.378697 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:39:57.378710 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:39:57.378721 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:39:57.378733 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:39:57.378745 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:39:57.378763 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:39:57.378775 kernel: audit: type=1403 audit(1738168796.279:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:39:57.378791 systemd[1]: Successfully loaded SELinux policy in 75.189ms. Jan 29 16:39:57.378811 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.985ms. Jan 29 16:39:57.378826 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:39:57.378838 systemd[1]: Detected virtualization kvm. Jan 29 16:39:57.378851 systemd[1]: Detected architecture x86-64. Jan 29 16:39:57.378863 systemd[1]: Detected first boot. Jan 29 16:39:57.378878 systemd[1]: Hostname set to . Jan 29 16:39:57.378890 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:39:57.378903 zram_generator::config[1005]: No configuration found. Jan 29 16:39:57.378916 kernel: Guest personality initialized and is inactive Jan 29 16:39:57.378933 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:39:57.378945 kernel: Initialized host personality Jan 29 16:39:57.378956 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:39:57.378968 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:39:57.378984 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:39:57.378997 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:39:57.379014 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:39:57.379026 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:39:57.379040 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:39:57.379052 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:39:57.379064 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:39:57.379077 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:39:57.379090 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:39:57.379105 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:39:57.379118 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:39:57.379130 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:39:57.379143 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:39:57.379156 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 16:39:57.379168 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:39:57.379181 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:39:57.379196 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:39:57.379210 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:39:57.379223 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:39:57.379407 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:39:57.379423 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:39:57.379435 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:39:57.379447 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:39:57.379460 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:39:57.379478 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:39:57.379492 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:39:57.379505 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:39:57.379518 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:39:57.379531 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:39:57.379544 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:39:57.379557 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:39:57.379570 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:39:57.379583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:39:57.379599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:39:57.379612 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:39:57.379625 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:39:57.379638 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:39:57.379651 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:39:57.379665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:39:57.379678 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:39:57.379691 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:39:57.379704 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:39:57.379721 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:39:57.379734 systemd[1]: Reached target machines.target - Containers. Jan 29 16:39:57.379747 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:39:57.379760 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:39:57.379774 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 29 16:39:57.379787 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:39:57.379800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:39:57.379814 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:39:57.379829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:39:57.379843 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:39:57.379856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:39:57.379869 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:39:57.379883 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:39:57.379901 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:39:57.379921 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:39:57.379942 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:39:57.379966 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:39:57.379991 kernel: fuse: init (API version 7.39) Jan 29 16:39:57.380013 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:39:57.380409 kernel: loop: module loaded Jan 29 16:39:57.380446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:39:57.380470 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:39:57.380493 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:39:57.380515 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:39:57.380537 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:39:57.380566 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:39:57.380589 systemd[1]: Stopped verity-setup.service. Jan 29 16:39:57.380612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:39:57.380638 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:39:57.380661 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:39:57.380683 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:39:57.380858 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:39:57.380883 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:39:57.380905 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:39:57.380928 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:39:57.380955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:39:57.380996 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:39:57.381014 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jan 29 16:39:57.381028 kernel: ACPI: bus type drm_connector registered Jan 29 16:39:57.381040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:39:57.381054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:39:57.381067 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:39:57.381324 systemd-journald[1109]: Collecting audit messages is disabled. Jan 29 16:39:57.381363 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:39:57.381378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:39:57.381392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:39:57.381405 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:39:57.381418 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:39:57.381436 systemd-journald[1109]: Journal started Jan 29 16:39:57.381462 systemd-journald[1109]: Runtime Journal (/run/log/journal/b337644c963b4da083564898f02237d6) is 8M, max 78.3M, 70.3M free. Jan 29 16:39:56.985062 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:39:57.384476 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:39:56.993536 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 16:39:56.993950 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:39:57.386455 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:39:57.386661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:39:57.387624 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:39:57.388477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:39:57.389355 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:39:57.393700 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:39:57.400131 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:39:57.408022 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:39:57.416337 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:39:57.416964 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:39:57.417004 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:39:57.418731 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:39:57.423881 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:39:57.428025 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:39:57.428723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:39:57.436448 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:39:57.438020 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:39:57.438611 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 29 16:39:57.441377 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:39:57.442614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:39:57.450416 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:39:57.454698 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:39:57.464499 systemd-journald[1109]: Time spent on flushing to /var/log/journal/b337644c963b4da083564898f02237d6 is 56.413ms for 940 entries. Jan 29 16:39:57.464499 systemd-journald[1109]: System Journal (/var/log/journal/b337644c963b4da083564898f02237d6) is 8M, max 584.8M, 576.8M free. Jan 29 16:39:57.536957 systemd-journald[1109]: Received client request to flush runtime journal. Jan 29 16:39:57.537016 kernel: loop0: detected capacity change from 0 to 147912 Jan 29 16:39:57.466556 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:39:57.470158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:39:57.472065 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:39:57.472856 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:39:57.475786 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:39:57.482731 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:39:57.520502 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:39:57.522482 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:39:57.531486 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:39:57.547922 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:39:57.561261 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:39:57.563673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:39:57.572944 systemd-tmpfiles[1145]: ACLs are not supported, ignoring. Jan 29 16:39:57.572972 systemd-tmpfiles[1145]: ACLs are not supported, ignoring. Jan 29 16:39:57.580596 udevadm[1153]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 16:39:57.585472 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:39:57.595985 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:39:57.607115 kernel: loop1: detected capacity change from 0 to 138176 Jan 29 16:39:57.619542 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:39:57.659998 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:39:57.668476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:39:57.672258 kernel: loop2: detected capacity change from 0 to 8 Jan 29 16:39:57.691284 kernel: loop3: detected capacity change from 0 to 210664 Jan 29 16:39:57.700569 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 29 16:39:57.700645 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. 
Jan 29 16:39:57.707182 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:39:57.748260 kernel: loop4: detected capacity change from 0 to 147912 Jan 29 16:39:57.785266 kernel: loop5: detected capacity change from 0 to 138176 Jan 29 16:39:57.870250 kernel: loop6: detected capacity change from 0 to 8 Jan 29 16:39:57.874257 kernel: loop7: detected capacity change from 0 to 210664 Jan 29 16:39:57.925580 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 29 16:39:57.926067 (sd-merge)[1174]: Merged extensions into '/usr'. Jan 29 16:39:57.932131 systemd[1]: Reload requested from client PID 1144 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:39:57.932149 systemd[1]: Reloading... Jan 29 16:39:58.066291 zram_generator::config[1205]: No configuration found. Jan 29 16:39:58.319416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:39:58.419956 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:39:58.420180 systemd[1]: Reloading finished in 487 ms. Jan 29 16:39:58.437747 ldconfig[1139]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:39:58.440325 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:39:58.441349 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:39:58.442304 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:39:58.454371 systemd[1]: Starting ensure-sysext.service... Jan 29 16:39:58.457404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:39:58.460471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:39:58.475396 systemd[1]: Reload requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:39:58.475414 systemd[1]: Reloading... Jan 29 16:39:58.504657 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:39:58.508547 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:39:58.509447 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:39:58.509731 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 29 16:39:58.509793 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 29 16:39:58.515944 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:39:58.516273 systemd-tmpfiles[1260]: Skipping /boot Jan 29 16:39:58.526933 systemd-udevd[1261]: Using default interface naming scheme 'v255'. Jan 29 16:39:58.546934 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:39:58.546950 systemd-tmpfiles[1260]: Skipping /boot Jan 29 16:39:58.563283 zram_generator::config[1289]: No configuration found. 
Jan 29 16:39:58.704258 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1300) Jan 29 16:39:58.752256 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 16:39:58.804258 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 16:39:58.814256 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 16:39:58.826257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:39:58.869257 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:39:58.872327 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:39:58.899527 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 16:39:58.899611 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 16:39:58.901490 kernel: Console: switching to colour dummy device 80x25 Jan 29 16:39:58.905326 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 16:39:58.905383 kernel: [drm] features: -context_init Jan 29 16:39:58.908250 kernel: [drm] number of scanouts: 1 Jan 29 16:39:58.909248 kernel: [drm] number of cap sets: 0 Jan 29 16:39:58.914251 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 16:39:58.921471 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 16:39:58.921577 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 16:39:58.927100 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 16:39:58.963120 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:39:58.963200 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:39:58.966028 systemd[1]: Reloading finished in 490 ms. Jan 29 16:39:58.979121 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:39:58.995463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:39:59.027211 systemd[1]: Finished ensure-sysext.service. Jan 29 16:39:59.045057 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:39:59.050806 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:39:59.059457 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:39:59.068384 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:39:59.070497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:39:59.073390 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:39:59.086407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:39:59.088295 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:39:59.092408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:39:59.101517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:39:59.103676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 16:39:59.105873 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:39:59.108359 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:39:59.119314 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:39:59.126210 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:39:59.136571 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:39:59.148620 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:39:59.157518 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:39:59.161304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:39:59.163507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:39:59.164444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:39:59.164829 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:39:59.165794 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:39:59.166885 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:39:59.167183 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:39:59.169529 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:39:59.169857 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:39:59.170000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:39:59.180717 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:39:59.180890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:39:59.187566 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:39:59.195869 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:39:59.206794 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:39:59.206968 lvm[1381]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:39:59.244728 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:39:59.247530 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:39:59.257577 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:39:59.266370 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:39:59.279933 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:39:59.302928 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:39:59.419088 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:39:59.419901 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 29 16:39:59.424259 augenrules[1434]: No rules Jan 29 16:39:59.426838 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:39:59.427022 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:39:59.439015 systemd-resolved[1390]: Positive Trust Anchors: Jan 29 16:39:59.439028 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:39:59.439070 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:39:59.446867 systemd-resolved[1390]: Using system hostname 'ci-4230-0-0-b-151e37739e.novalocal'. Jan 29 16:39:59.448201 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:39:59.450705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:39:59.479994 systemd-networkd[1389]: lo: Link UP Jan 29 16:39:59.480004 systemd-networkd[1389]: lo: Gained carrier Jan 29 16:39:59.481419 systemd-networkd[1389]: Enumeration completed Jan 29 16:39:59.481572 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:39:59.481777 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:39:59.481781 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:39:59.483839 systemd[1]: Reached target network.target - Network. Jan 29 16:39:59.483973 systemd-networkd[1389]: eth0: Link UP Jan 29 16:39:59.483978 systemd-networkd[1389]: eth0: Gained carrier Jan 29 16:39:59.483997 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:39:59.493575 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:39:59.497408 systemd-networkd[1389]: eth0: DHCPv4 address 172.24.4.158/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 16:39:59.499722 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Jan 29 16:39:59.506991 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:39:59.511444 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:39:59.523530 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:39:59.567948 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:39:59.589572 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:39:59.722088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:39:59.762484 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 29 16:39:59.765792 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:39:59.765890 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:39:59.768959 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:39:59.771718 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:39:59.774587 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:39:59.777283 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:39:59.779841 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:39:59.782147 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:39:59.782475 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:39:59.784812 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:39:59.790174 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:39:59.796863 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:39:59.807163 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:39:59.811569 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:39:59.814057 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:39:59.831599 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:39:59.836741 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:39:59.841076 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:39:59.845050 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:39:59.848984 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:39:59.853024 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:39:59.853132 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:39:59.860479 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:39:59.879691 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:39:59.903536 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:39:59.919447 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:39:59.932817 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:39:59.935921 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:39:59.947543 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:39:59.948679 jq[1458]: false Jan 29 16:39:59.953447 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:39:59.962658 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 29 16:39:59.970799 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:39:59.976112 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:39:59.976716 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:39:59.977395 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:39:59.986368 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:39:59.990395 extend-filesystems[1459]: Found loop4 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found loop5 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found loop6 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found loop7 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found vda Jan 29 16:40:00.000146 extend-filesystems[1459]: Found vda1 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found vda2 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found vda3 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found usr Jan 29 16:40:00.000146 extend-filesystems[1459]: Found vda4 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found vda6 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found vda7 Jan 29 16:40:00.000146 extend-filesystems[1459]: Found vda9 Jan 29 16:40:00.000146 extend-filesystems[1459]: Checking size of /dev/vda9 Jan 29 16:40:00.152361 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 29 16:40:00.152398 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 29 16:40:00.152422 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1319) Jan 29 16:39:59.999194 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:40:00.001096 dbus-daemon[1455]: [system] SELinux support is enabled Jan 29 16:40:00.152860 extend-filesystems[1459]: Resized partition /dev/vda9 Jan 29 16:40:00.000333 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:40:00.168569 extend-filesystems[1484]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:40:00.168569 extend-filesystems[1484]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 16:40:00.168569 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 16:40:00.168569 extend-filesystems[1484]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 29 16:40:00.186858 jq[1466]: true Jan 29 16:40:00.000634 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:40:00.188753 extend-filesystems[1459]: Resized filesystem in /dev/vda9 Jan 29 16:40:00.190766 update_engine[1465]: I20250129 16:40:00.048615 1465 main.cc:92] Flatcar Update Engine starting Jan 29 16:40:00.190766 update_engine[1465]: I20250129 16:40:00.050220 1465 update_check_scheduler.cc:74] Next update check in 4m28s Jan 29 16:40:00.000798 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:40:00.013815 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:40:00.191324 jq[1480]: true Jan 29 16:40:00.041123 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 29 16:40:00.041157 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:40:00.053874 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:40:00.053908 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:40:00.056692 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:40:00.056953 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:40:00.076171 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:40:00.097860 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:40:00.109492 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:40:00.154908 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:40:00.155136 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:40:00.229184 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:40:00.227955 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:40:00.229105 systemd-logind[1464]: New seat seat0. Jan 29 16:40:00.245312 systemd-logind[1464]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 16:40:00.245340 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:40:00.248448 systemd[1]: Starting sshkeys.service... Jan 29 16:40:00.250899 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:40:00.281614 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:40:00.297654 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:40:00.347584 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:40:00.486302 containerd[1486]: time="2025-01-29T16:40:00.486165929Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:40:00.530325 containerd[1486]: time="2025-01-29T16:40:00.530084098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:40:00.531519 containerd[1486]: time="2025-01-29T16:40:00.531491678Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.531571958Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.531593839Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.531760161Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.531778536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.531845131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.531861010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.532047460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.532064121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.532078719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.532089479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.532163287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:40:00.532791 containerd[1486]: time="2025-01-29T16:40:00.532394251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:40:00.533091 containerd[1486]: time="2025-01-29T16:40:00.532515438Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:40:00.533091 containerd[1486]: time="2025-01-29T16:40:00.532530917Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:40:00.533091 containerd[1486]: time="2025-01-29T16:40:00.532605407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:40:00.533091 containerd[1486]: time="2025-01-29T16:40:00.532657484Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:40:00.540153 containerd[1486]: time="2025-01-29T16:40:00.540130720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:40:00.540300 containerd[1486]: time="2025-01-29T16:40:00.540281192Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:40:00.540424 containerd[1486]: time="2025-01-29T16:40:00.540407038Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:40:00.540530 containerd[1486]: time="2025-01-29T16:40:00.540513338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 16:40:00.540613 containerd[1486]: time="2025-01-29T16:40:00.540597325Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:40:00.540892 containerd[1486]: time="2025-01-29T16:40:00.540871109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:40:00.541206 containerd[1486]: time="2025-01-29T16:40:00.541188444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:40:00.541438 containerd[1486]: time="2025-01-29T16:40:00.541420680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:40:00.541534 containerd[1486]: time="2025-01-29T16:40:00.541518633Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:40:00.541625 containerd[1486]: time="2025-01-29T16:40:00.541608963Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:40:00.541713 containerd[1486]: time="2025-01-29T16:40:00.541698030Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:40:00.541801 containerd[1486]: time="2025-01-29T16:40:00.541785854Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:40:00.541894 containerd[1486]: time="2025-01-29T16:40:00.541878829Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:40:00.541987 containerd[1486]: time="2025-01-29T16:40:00.541971964Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:40:00.542085 containerd[1486]: time="2025-01-29T16:40:00.542067883Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:40:00.542172 containerd[1486]: time="2025-01-29T16:40:00.542157341Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:40:00.542268 containerd[1486]: time="2025-01-29T16:40:00.542253241Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:40:00.542371 containerd[1486]: time="2025-01-29T16:40:00.542354701Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:40:00.542476 containerd[1486]: time="2025-01-29T16:40:00.542460129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.542565 containerd[1486]: time="2025-01-29T16:40:00.542550428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.542664 containerd[1486]: time="2025-01-29T16:40:00.542645046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.542755 containerd[1486]: time="2025-01-29T16:40:00.542740345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.542852 containerd[1486]: time="2025-01-29T16:40:00.542836174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 29 16:40:00.542992 containerd[1486]: time="2025-01-29T16:40:00.542925582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.542992 containerd[1486]: time="2025-01-29T16:40:00.542945459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.542992 containerd[1486]: time="2025-01-29T16:40:00.542962030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.543171 containerd[1486]: time="2025-01-29T16:40:00.542978010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.543171 containerd[1486]: time="2025-01-29T16:40:00.543120468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.543171 containerd[1486]: time="2025-01-29T16:40:00.543136177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.543171 containerd[1486]: time="2025-01-29T16:40:00.543150203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.543429 containerd[1486]: time="2025-01-29T16:40:00.543285707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.543429 containerd[1486]: time="2025-01-29T16:40:00.543308751Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:40:00.543597 containerd[1486]: time="2025-01-29T16:40:00.543333337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.543597 containerd[1486]: time="2025-01-29T16:40:00.543510178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.543597 containerd[1486]: time="2025-01-29T16:40:00.543527661Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:40:00.543788 containerd[1486]: time="2025-01-29T16:40:00.543672353Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:40:00.543788 containerd[1486]: time="2025-01-29T16:40:00.543697370Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:40:00.544020 containerd[1486]: time="2025-01-29T16:40:00.543709693Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:40:00.544020 containerd[1486]: time="2025-01-29T16:40:00.543951636Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:40:00.544020 containerd[1486]: time="2025-01-29T16:40:00.543966514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.544020 containerd[1486]: time="2025-01-29T16:40:00.543985300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:40:00.544020 containerd[1486]: time="2025-01-29T16:40:00.543997563Z" level=info msg="NRI interface is disabled by configuration." 
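The snapshotter probing above is routine containerd start-up: aufs, blockfile, btrfs, devmapper and zfs are skipped because their prerequisites are missing on a stock Flatcar root (ext4 filesystem, no extra modules), leaving overlayfs as the snapshotter the CRI plugin actually uses, which the config dump that follows confirms with Snapshotter:overlayfs. A quick way to see the same load/skip status from a shell, assuming the ctr client that ships with containerd v1.7.x is on the PATH:

```sh
# List containerd plugins with their load status; the skipped snapshotters
# appear here with the same "skip plugin" reasons shown in the log above.
ctr plugins ls | grep io.containerd.snapshotter
```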
Jan 29 16:40:00.544331 containerd[1486]: time="2025-01-29T16:40:00.544172881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 16:40:00.544720 containerd[1486]: time="2025-01-29T16:40:00.544646460Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:40:00.545256 containerd[1486]: time="2025-01-29T16:40:00.544900406Z" level=info msg="Connect containerd service" Jan 29 16:40:00.545256 containerd[1486]: time="2025-01-29T16:40:00.544943877Z" level=info msg="using legacy CRI server" Jan 29 16:40:00.545256 containerd[1486]: time="2025-01-29T16:40:00.544975336Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:40:00.545256 containerd[1486]: time="2025-01-29T16:40:00.545110480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:40:00.547793 containerd[1486]: time="2025-01-29T16:40:00.547762684Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:40:00.551405 containerd[1486]: time="2025-01-29T16:40:00.551349822Z" level=info msg="Start subscribing containerd event" Jan 29 16:40:00.551455 containerd[1486]: time="2025-01-29T16:40:00.551421486Z" level=info msg="Start recovering state" Jan 29 16:40:00.551545 containerd[1486]: time="2025-01-29T16:40:00.551495785Z" level=info msg="Start event monitor" Jan 29 16:40:00.551545 containerd[1486]: time="2025-01-29T16:40:00.551519450Z" level=info msg="Start snapshots syncer" Jan 29 16:40:00.551545 containerd[1486]: time="2025-01-29T16:40:00.551530300Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:40:00.551545 containerd[1486]: time="2025-01-29T16:40:00.551538916Z" level=info msg="Start streaming server" Jan 29 16:40:00.551883 containerd[1486]: time="2025-01-29T16:40:00.551833148Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:40:00.552072 containerd[1486]: time="2025-01-29T16:40:00.551996915Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:40:00.552155 containerd[1486]: time="2025-01-29T16:40:00.552139703Z" level=info msg="containerd successfully booted in 0.067733s" Jan 29 16:40:00.552155 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:40:00.911576 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:40:00.947198 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:40:00.961971 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:40:00.967897 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:40:00.968351 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:40:00.984613 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:40:01.002429 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:40:01.012933 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:40:01.031926 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:40:01.037061 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:40:01.082478 systemd-networkd[1389]: eth0: Gained IPv6LL Jan 29 16:40:01.083025 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Jan 29 16:40:01.085566 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:40:01.091619 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:40:01.102788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:40:01.109616 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:40:01.161720 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:40:03.019531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:40:03.036447 (kubelet)[1564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:40:04.528340 kubelet[1564]: E0129 16:40:04.528186 1564 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:40:04.533646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:40:04.533982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:40:04.535013 systemd[1]: kubelet.service: Consumed 2.098s CPU time, 246.3M memory peak. Jan 29 16:40:05.783513 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:40:05.797166 systemd[1]: Started sshd@0-172.24.4.158:22-172.24.4.1:42376.service - OpenSSH per-connection server daemon (172.24.4.1:42376). Jan 29 16:40:06.119288 login[1544]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Jan 29 16:40:06.124672 login[1545]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 16:40:06.158405 systemd-logind[1464]: New session 1 of user core. Jan 29 16:40:06.161982 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:40:06.173990 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:40:06.200596 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:40:06.213055 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:40:06.329844 (systemd)[1582]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:40:06.336666 systemd-logind[1464]: New session c1 of user core. Jan 29 16:40:06.712193 systemd[1582]: Queued start job for default target default.target. Jan 29 16:40:06.723264 systemd[1582]: Created slice app.slice - User Application Slice. Jan 29 16:40:06.723292 systemd[1582]: Reached target paths.target - Paths. Jan 29 16:40:06.723337 systemd[1582]: Reached target timers.target - Timers. Jan 29 16:40:06.724672 systemd[1582]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:40:06.743017 systemd[1582]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:40:06.743137 systemd[1582]: Reached target sockets.target - Sockets. Jan 29 16:40:06.743179 systemd[1582]: Reached target basic.target - Basic System. Jan 29 16:40:06.743218 systemd[1582]: Reached target default.target - Main User Target. Jan 29 16:40:06.743279 systemd[1582]: Startup finished in 392ms. Jan 29 16:40:06.743755 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:40:06.756643 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:40:06.997708 coreos-metadata[1454]: Jan 29 16:40:06.997 WARN failed to locate config-drive, using the metadata service API instead Jan 29 16:40:07.048335 coreos-metadata[1454]: Jan 29 16:40:07.048 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 16:40:07.120160 login[1544]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 16:40:07.132347 systemd-logind[1464]: New session 2 of user core. Jan 29 16:40:07.145920 systemd[1]: Started session-2.scope - Session 2 of User core. 
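The kubelet exit at 16:40:04 is the first of several identical failures in this log: the unit starts before any cluster bootstrap has run, so /var/lib/kubelet/config.yaml does not exist yet, the process exits with status 1, and systemd re-queues it on its restart timer. In a kubeadm-style flow that file is generated by kubeadm init or kubeadm join rather than written by hand; the snippet below is only a sketch of its general shape, written to /tmp so it changes nothing. The cgroupDriver value matches the CgroupDriver:"systemd" seen in the kubelet's container-manager dump later in the log; the DNS values are common defaults and an assumption here.

```sh
# Sketch of the file the kubelet is looking for; not the real cluster config.
cat <<'EOF' > /tmp/kubelet-config-example.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches the CgroupDriver seen later in this log
clusterDomain: cluster.local   # assumed default
clusterDNS:
  - 10.96.0.10                 # assumed default, not taken from this log
EOF
```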
Jan 29 16:40:07.274422 sshd[1574]: Accepted publickey for core from 172.24.4.1 port 42376 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:40:07.277393 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:40:07.290020 systemd-logind[1464]: New session 3 of user core. Jan 29 16:40:07.309918 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:40:07.354190 coreos-metadata[1454]: Jan 29 16:40:07.354 INFO Fetch successful Jan 29 16:40:07.354190 coreos-metadata[1454]: Jan 29 16:40:07.354 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 16:40:07.369582 coreos-metadata[1454]: Jan 29 16:40:07.369 INFO Fetch successful Jan 29 16:40:07.369582 coreos-metadata[1454]: Jan 29 16:40:07.369 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 16:40:07.383461 coreos-metadata[1454]: Jan 29 16:40:07.383 INFO Fetch successful Jan 29 16:40:07.383461 coreos-metadata[1454]: Jan 29 16:40:07.383 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 16:40:07.387750 coreos-metadata[1513]: Jan 29 16:40:07.387 WARN failed to locate config-drive, using the metadata service API instead Jan 29 16:40:07.399604 coreos-metadata[1454]: Jan 29 16:40:07.399 INFO Fetch successful Jan 29 16:40:07.399604 coreos-metadata[1454]: Jan 29 16:40:07.399 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 16:40:07.415139 coreos-metadata[1454]: Jan 29 16:40:07.415 INFO Fetch successful Jan 29 16:40:07.415139 coreos-metadata[1454]: Jan 29 16:40:07.415 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 16:40:07.429322 coreos-metadata[1454]: Jan 29 16:40:07.429 INFO Fetch successful Jan 29 16:40:07.430709 coreos-metadata[1513]: Jan 29 16:40:07.430 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 16:40:07.446543 coreos-metadata[1513]: Jan 29 16:40:07.446 INFO Fetch successful Jan 29 16:40:07.447438 coreos-metadata[1513]: Jan 29 16:40:07.447 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 16:40:07.464375 coreos-metadata[1513]: Jan 29 16:40:07.463 INFO Fetch successful Jan 29 16:40:07.468594 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:40:07.470143 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:40:07.471072 unknown[1513]: wrote ssh authorized keys file for user: core Jan 29 16:40:07.508433 update-ssh-keys[1621]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:40:07.510194 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:40:07.513419 systemd[1]: Finished sshkeys.service. Jan 29 16:40:07.518376 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:40:07.518717 systemd[1]: Startup finished in 1.112s (kernel) + 14.534s (initrd) + 11.313s (userspace) = 26.960s. Jan 29 16:40:07.822816 systemd[1]: Started sshd@1-172.24.4.158:22-172.24.4.1:42390.service - OpenSSH per-connection server daemon (172.24.4.1:42390). 
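The coreos-metadata fetches above all go to the OpenStack metadata service at the link-local address 169.254.169.254 (after the config-drive lookup fails and the agent falls back to the API). The same endpoints can be queried by hand from inside the instance, which is occasionally useful when one of these fetches does not come back successful:

```sh
# Same endpoints the metadata agents hit in the log; only reachable from the
# instance itself over the link-local address.
curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
curl -s http://169.254.169.254/latest/meta-data/hostname
curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
```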
Jan 29 16:40:09.075549 sshd[1626]: Accepted publickey for core from 172.24.4.1 port 42390 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:40:09.078186 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:40:09.090222 systemd-logind[1464]: New session 4 of user core. Jan 29 16:40:09.101522 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:40:09.722292 sshd[1628]: Connection closed by 172.24.4.1 port 42390 Jan 29 16:40:09.722058 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Jan 29 16:40:09.738547 systemd[1]: sshd@1-172.24.4.158:22-172.24.4.1:42390.service: Deactivated successfully. Jan 29 16:40:09.741940 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:40:09.743810 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:40:09.758205 systemd[1]: Started sshd@2-172.24.4.158:22-172.24.4.1:42404.service - OpenSSH per-connection server daemon (172.24.4.1:42404). Jan 29 16:40:09.761678 systemd-logind[1464]: Removed session 4. Jan 29 16:40:11.106991 sshd[1633]: Accepted publickey for core from 172.24.4.1 port 42404 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:40:11.110208 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:40:11.122905 systemd-logind[1464]: New session 5 of user core. Jan 29 16:40:11.132625 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:40:11.753490 sshd[1636]: Connection closed by 172.24.4.1 port 42404 Jan 29 16:40:11.754578 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Jan 29 16:40:11.771878 systemd[1]: sshd@2-172.24.4.158:22-172.24.4.1:42404.service: Deactivated successfully. Jan 29 16:40:11.775121 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:40:11.778638 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:40:11.785830 systemd[1]: Started sshd@3-172.24.4.158:22-172.24.4.1:42420.service - OpenSSH per-connection server daemon (172.24.4.1:42420). Jan 29 16:40:11.789358 systemd-logind[1464]: Removed session 5. Jan 29 16:40:13.051295 sshd[1641]: Accepted publickey for core from 172.24.4.1 port 42420 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:40:13.086464 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:40:13.101193 systemd-logind[1464]: New session 6 of user core. Jan 29 16:40:13.109503 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:40:13.695841 sshd[1644]: Connection closed by 172.24.4.1 port 42420 Jan 29 16:40:13.696590 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Jan 29 16:40:13.710395 systemd[1]: sshd@3-172.24.4.158:22-172.24.4.1:42420.service: Deactivated successfully. Jan 29 16:40:13.712599 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:40:13.713835 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:40:13.720745 systemd[1]: Started sshd@4-172.24.4.158:22-172.24.4.1:56010.service - OpenSSH per-connection server daemon (172.24.4.1:56010). Jan 29 16:40:13.723479 systemd-logind[1464]: Removed session 6. Jan 29 16:40:14.622062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:40:14.633554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
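The "Scheduled restart job, restart counter is at 1" line above is systemd's Restart= logic re-queuing the still-unconfigured kubelet, not an operator intervening. Which policy and delay apply, and how many restarts have accumulated so far, can be read back from the unit; the property names below are standard systemd ones, not taken from this log:

```sh
# Inspect the restart policy behind the "restart counter" message.
systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
systemctl status kubelet.service --no-pager
```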
Jan 29 16:40:14.915859 sshd[1649]: Accepted publickey for core from 172.24.4.1 port 56010 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:40:14.919538 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:40:14.941759 systemd-logind[1464]: New session 7 of user core. Jan 29 16:40:14.953124 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:40:14.957495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:40:14.977031 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:40:15.183526 kubelet[1659]: E0129 16:40:15.183348 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:40:15.190838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:40:15.191136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:40:15.192070 systemd[1]: kubelet.service: Consumed 308ms CPU time, 98M memory peak. Jan 29 16:40:15.367036 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:40:15.367739 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:40:15.388200 sudo[1669]: pam_unix(sudo:session): session closed for user root Jan 29 16:40:15.701287 sshd[1661]: Connection closed by 172.24.4.1 port 56010 Jan 29 16:40:15.702491 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Jan 29 16:40:15.719161 systemd[1]: sshd@4-172.24.4.158:22-172.24.4.1:56010.service: Deactivated successfully. Jan 29 16:40:15.723901 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:40:15.726575 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:40:15.736009 systemd[1]: Started sshd@5-172.24.4.158:22-172.24.4.1:56016.service - OpenSSH per-connection server daemon (172.24.4.1:56016). Jan 29 16:40:15.739040 systemd-logind[1464]: Removed session 7. Jan 29 16:40:17.080787 sshd[1674]: Accepted publickey for core from 172.24.4.1 port 56016 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:40:17.083778 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:40:17.094641 systemd-logind[1464]: New session 8 of user core. Jan 29 16:40:17.108671 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:40:17.559566 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:40:17.560294 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:40:17.566824 sudo[1679]: pam_unix(sudo:session): session closed for user root Jan 29 16:40:17.577154 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:40:17.577817 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:40:17.597782 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 29 16:40:17.631852 augenrules[1701]: No rules Jan 29 16:40:17.632391 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:40:17.632591 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:40:17.633569 sudo[1678]: pam_unix(sudo:session): session closed for user root Jan 29 16:40:17.828427 sshd[1677]: Connection closed by 172.24.4.1 port 56016 Jan 29 16:40:17.830692 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Jan 29 16:40:17.851683 systemd[1]: sshd@5-172.24.4.158:22-172.24.4.1:56016.service: Deactivated successfully. Jan 29 16:40:17.854944 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:40:17.856794 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:40:17.863800 systemd[1]: Started sshd@6-172.24.4.158:22-172.24.4.1:56030.service - OpenSSH per-connection server daemon (172.24.4.1:56030). Jan 29 16:40:17.866723 systemd-logind[1464]: Removed session 8. Jan 29 16:40:19.087608 sshd[1709]: Accepted publickey for core from 172.24.4.1 port 56030 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:40:19.090198 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:40:19.101794 systemd-logind[1464]: New session 9 of user core. Jan 29 16:40:19.110592 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:40:19.665949 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:40:19.667707 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:40:21.131961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:40:21.132536 systemd[1]: kubelet.service: Consumed 308ms CPU time, 98M memory peak. Jan 29 16:40:21.151879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:40:21.201405 systemd[1]: Reload requested from client PID 1750 ('systemctl') (unit session-9.scope)... Jan 29 16:40:21.201433 systemd[1]: Reloading... Jan 29 16:40:21.305273 zram_generator::config[1795]: No configuration found. Jan 29 16:40:21.502997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:40:21.625494 systemd[1]: Reloading finished in 423 ms. Jan 29 16:40:21.678837 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 16:40:21.678918 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 16:40:21.679156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:40:21.679208 systemd[1]: kubelet.service: Consumed 125ms CPU time, 83.3M memory peak. Jan 29 16:40:21.680872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:40:21.789880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:40:21.794697 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:40:22.128064 kubelet[1861]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
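During the systemd reload at 16:40:21 (requested from session-9.scope, presumably by the /home/core/install.sh run seen under sudo in that session), systemd flags docker.socket for pointing ListenStream= at the legacy /var/run/ path and rewrites it to /run/docker.sock on the fly. A permanent fix is an ordinary drop-in override rather than editing the shipped unit; the mechanism below is standard systemd, and only the override file name is invented here:

```sh
# Hypothetical drop-in silencing the /var/run/docker.sock warning; the unit
# name and target path come from the log, the override file name does not.
sudo mkdir -p /etc/systemd/system/docker.socket.d
sudo tee /etc/systemd/system/docker.socket.d/10-runtime-dir.conf <<'EOF' >/dev/null
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
sudo systemctl daemon-reload
```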
Jan 29 16:40:22.128064 kubelet[1861]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:40:22.128064 kubelet[1861]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:40:22.131507 kubelet[1861]: I0129 16:40:22.131379 1861 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:40:22.651993 kubelet[1861]: I0129 16:40:22.649664 1861 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:40:22.652358 kubelet[1861]: I0129 16:40:22.652222 1861 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:40:22.653375 kubelet[1861]: I0129 16:40:22.653341 1861 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:40:22.728433 kubelet[1861]: I0129 16:40:22.728326 1861 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:40:22.808474 kubelet[1861]: I0129 16:40:22.808421 1861 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:40:22.822382 kubelet[1861]: I0129 16:40:22.822070 1861 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:40:22.822711 kubelet[1861]: I0129 16:40:22.822192 1861 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.158","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:40:22.822926 kubelet[1861]: I0129 16:40:22.822713 1861 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:40:22.822926 kubelet[1861]: I0129 16:40:22.822743 1861 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:40:22.823082 kubelet[1861]: I0129 
16:40:22.822987 1861 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:40:22.828447 kubelet[1861]: I0129 16:40:22.828352 1861 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:40:22.828447 kubelet[1861]: I0129 16:40:22.828400 1861 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:40:22.828447 kubelet[1861]: I0129 16:40:22.828446 1861 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:40:22.829309 kubelet[1861]: I0129 16:40:22.828480 1861 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:40:22.829509 kubelet[1861]: E0129 16:40:22.829449 1861 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:22.830032 kubelet[1861]: E0129 16:40:22.829972 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:22.842861 kubelet[1861]: W0129 16:40:22.842785 1861 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.158" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 16:40:22.842861 kubelet[1861]: E0129 16:40:22.842851 1861 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.158" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 16:40:22.847869 kubelet[1861]: I0129 16:40:22.847781 1861 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:40:22.853306 kubelet[1861]: I0129 16:40:22.852014 1861 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:40:22.853306 kubelet[1861]: W0129 16:40:22.852118 1861 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
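The "system:anonymous cannot list resource \"nodes\"" errors above are the expected face of TLS bootstrapping: "Client rotation is on, will bootstrap in background" means the kubelet starts out with a bootstrap credential, and until the API server signs its client certificate the informer list/watch calls are rejected; the repeated "node \"172.24.4.158\" not found" messages below taper off once registration succeeds and the caches fill. A couple of generic checks for this phase; the paths and commands are standard kubelet/kubectl ones, not taken from this log:

```sh
# Client certificates the kubelet obtains once bootstrapping/rotation completes.
ls -l /var/lib/kubelet/pki/

# From any machine with an admin kubeconfig: pending/approved kubelet CSRs
# and whether the node object has appeared.
kubectl get csr
kubectl get node 172.24.4.158
```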
Jan 29 16:40:22.853554 kubelet[1861]: I0129 16:40:22.853372 1861 server.go:1264] "Started kubelet" Jan 29 16:40:22.854039 kubelet[1861]: I0129 16:40:22.853978 1861 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:40:22.860992 kubelet[1861]: I0129 16:40:22.860952 1861 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:40:22.870176 kubelet[1861]: I0129 16:40:22.870128 1861 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:40:22.884890 kubelet[1861]: I0129 16:40:22.884797 1861 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:40:22.885438 kubelet[1861]: I0129 16:40:22.885405 1861 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:40:22.890744 kubelet[1861]: I0129 16:40:22.890708 1861 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:40:22.899997 kubelet[1861]: I0129 16:40:22.890981 1861 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:40:22.900162 kubelet[1861]: I0129 16:40:22.900096 1861 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:40:22.903600 kubelet[1861]: I0129 16:40:22.902649 1861 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:40:22.903600 kubelet[1861]: I0129 16:40:22.902914 1861 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:40:22.912795 kubelet[1861]: I0129 16:40:22.912709 1861 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:40:22.913667 kubelet[1861]: E0129 16:40:22.913580 1861 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.158\" not found" node="172.24.4.158" Jan 29 16:40:22.958989 kubelet[1861]: I0129 16:40:22.958937 1861 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:40:22.958989 kubelet[1861]: I0129 16:40:22.958964 1861 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:40:22.958989 kubelet[1861]: I0129 16:40:22.958993 1861 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:40:22.992571 kubelet[1861]: I0129 16:40:22.992013 1861 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.158" Jan 29 16:40:23.077301 kubelet[1861]: I0129 16:40:23.077204 1861 policy_none.go:49] "None policy: Start" Jan 29 16:40:23.081065 kubelet[1861]: I0129 16:40:23.080121 1861 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:40:23.081065 kubelet[1861]: I0129 16:40:23.080223 1861 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:40:23.100890 kubelet[1861]: I0129 16:40:23.100526 1861 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.158" Jan 29 16:40:23.124018 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:40:23.155495 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:40:23.160658 kubelet[1861]: E0129 16:40:23.160103 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:23.176345 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
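The kubepods, kubepods-besteffort and kubepods-burstable slices created here form the cgroup hierarchy the kubelet parents pods under when it runs with the systemd cgroup driver, as the earlier config dump shows it does. They are ordinary systemd units and can be inspected as such:

```sh
# The pod cgroup hierarchy as systemd sees it.
systemctl list-units --type=slice 'kubepods*'
systemctl status kubepods.slice --no-pager
```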
Jan 29 16:40:23.182113 kubelet[1861]: I0129 16:40:23.181162 1861 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:40:23.182113 kubelet[1861]: I0129 16:40:23.181519 1861 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:40:23.182113 kubelet[1861]: I0129 16:40:23.181738 1861 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:40:23.183097 kubelet[1861]: I0129 16:40:23.182977 1861 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:40:23.187680 kubelet[1861]: I0129 16:40:23.187631 1861 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:40:23.187680 kubelet[1861]: I0129 16:40:23.187668 1861 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:40:23.187680 kubelet[1861]: I0129 16:40:23.187693 1861 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:40:23.187966 kubelet[1861]: E0129 16:40:23.187755 1861 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 16:40:23.191702 kubelet[1861]: E0129 16:40:23.191057 1861 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.158\" not found" Jan 29 16:40:23.261442 kubelet[1861]: E0129 16:40:23.261198 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:23.318724 sudo[1713]: pam_unix(sudo:session): session closed for user root Jan 29 16:40:23.362040 kubelet[1861]: E0129 16:40:23.361960 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:23.462429 kubelet[1861]: E0129 16:40:23.462268 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:23.563541 kubelet[1861]: E0129 16:40:23.563408 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:23.566367 sshd[1712]: Connection closed by 172.24.4.1 port 56030 Jan 29 16:40:23.567399 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Jan 29 16:40:23.573339 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:40:23.575035 systemd[1]: sshd@6-172.24.4.158:22-172.24.4.1:56030.service: Deactivated successfully. Jan 29 16:40:23.579115 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:40:23.579606 systemd[1]: session-9.scope: Consumed 1.067s CPU time, 112.3M memory peak. Jan 29 16:40:23.585524 systemd-logind[1464]: Removed session 9. 
Jan 29 16:40:23.659381 kubelet[1861]: I0129 16:40:23.659303 1861 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:40:23.659716 kubelet[1861]: W0129 16:40:23.659585 1861 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:40:23.659716 kubelet[1861]: W0129 16:40:23.659641 1861 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:40:23.659716 kubelet[1861]: W0129 16:40:23.659678 1861 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:40:23.663763 kubelet[1861]: E0129 16:40:23.663661 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:23.764833 kubelet[1861]: E0129 16:40:23.764592 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:23.830666 kubelet[1861]: E0129 16:40:23.830482 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:23.865505 kubelet[1861]: E0129 16:40:23.865360 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:23.966496 kubelet[1861]: E0129 16:40:23.966340 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:24.067636 kubelet[1861]: E0129 16:40:24.067328 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:24.167637 kubelet[1861]: E0129 16:40:24.167492 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:24.268634 kubelet[1861]: E0129 16:40:24.268487 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:24.369883 kubelet[1861]: E0129 16:40:24.369576 1861 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.158\" not found" Jan 29 16:40:24.471310 kubelet[1861]: I0129 16:40:24.471162 1861 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 16:40:24.472411 containerd[1486]: time="2025-01-29T16:40:24.472186885Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 16:40:24.473736 kubelet[1861]: I0129 16:40:24.473594 1861 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 16:40:24.830858 kubelet[1861]: I0129 16:40:24.830600 1861 apiserver.go:52] "Watching apiserver" Jan 29 16:40:24.831295 kubelet[1861]: E0129 16:40:24.831128 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:24.838449 kubelet[1861]: I0129 16:40:24.838340 1861 topology_manager.go:215] "Topology Admit Handler" podUID="0598894a-40f7-4ecd-8a4d-913b5a8e613d" podNamespace="calico-system" podName="calico-node-plhx4" Jan 29 16:40:24.838676 kubelet[1861]: I0129 16:40:24.838599 1861 topology_manager.go:215] "Topology Admit Handler" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" podNamespace="calico-system" podName="csi-node-driver-n9lss" Jan 29 16:40:24.840867 kubelet[1861]: I0129 16:40:24.838818 1861 topology_manager.go:215] "Topology Admit Handler" podUID="ddebdd4c-4145-4a82-baf4-edda221158f8" podNamespace="kube-system" podName="kube-proxy-42zhm" Jan 29 16:40:24.840867 kubelet[1861]: E0129 16:40:24.839965 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:24.865538 systemd[1]: Created slice kubepods-besteffort-pod0598894a_40f7_4ecd_8a4d_913b5a8e613d.slice - libcontainer container kubepods-besteffort-pod0598894a_40f7_4ecd_8a4d_913b5a8e613d.slice. Jan 29 16:40:24.898586 systemd[1]: Created slice kubepods-besteffort-podddebdd4c_4145_4a82_baf4_edda221158f8.slice - libcontainer container kubepods-besteffort-podddebdd4c_4145_4a82_baf4_edda221158f8.slice. 
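At this point the node has its pod CIDR (192.168.1.0/24) and the kubelet has admitted three pods: calico-node-plhx4 and csi-node-driver-n9lss in calico-system and kube-proxy-42zhm in kube-system, with csi-node-driver-n9lss held back by the "network is not ready" error until Calico installs the CNI config noted earlier. Their sandboxes and containers can be watched from the node with crictl, assuming it is installed and pointed at the containerd socket from the CRI config dump above:

```sh
# Assumes crictl is available; the socket path comes from the containerd log.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
```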
Jan 29 16:40:24.900956 kubelet[1861]: I0129 16:40:24.900911 1861 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:40:24.912323 kubelet[1861]: I0129 16:40:24.912103 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0598894a-40f7-4ecd-8a4d-913b5a8e613d-cni-log-dir\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.912323 kubelet[1861]: I0129 16:40:24.912219 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4cbd60f8-abf3-4e44-b98f-73df647c2adc-varrun\") pod \"csi-node-driver-n9lss\" (UID: \"4cbd60f8-abf3-4e44-b98f-73df647c2adc\") " pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:24.912323 kubelet[1861]: I0129 16:40:24.912358 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddebdd4c-4145-4a82-baf4-edda221158f8-lib-modules\") pod \"kube-proxy-42zhm\" (UID: \"ddebdd4c-4145-4a82-baf4-edda221158f8\") " pod="kube-system/kube-proxy-42zhm" Jan 29 16:40:24.912323 kubelet[1861]: I0129 16:40:24.912440 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0598894a-40f7-4ecd-8a4d-913b5a8e613d-cni-net-dir\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.912908 kubelet[1861]: I0129 16:40:24.912527 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbd60f8-abf3-4e44-b98f-73df647c2adc-kubelet-dir\") pod \"csi-node-driver-n9lss\" (UID: \"4cbd60f8-abf3-4e44-b98f-73df647c2adc\") " pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:24.912908 kubelet[1861]: I0129 16:40:24.912589 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4cbd60f8-abf3-4e44-b98f-73df647c2adc-socket-dir\") pod \"csi-node-driver-n9lss\" (UID: \"4cbd60f8-abf3-4e44-b98f-73df647c2adc\") " pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:24.912908 kubelet[1861]: I0129 16:40:24.912666 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqkfq\" (UniqueName: \"kubernetes.io/projected/4cbd60f8-abf3-4e44-b98f-73df647c2adc-kube-api-access-qqkfq\") pod \"csi-node-driver-n9lss\" (UID: \"4cbd60f8-abf3-4e44-b98f-73df647c2adc\") " pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:24.912908 kubelet[1861]: I0129 16:40:24.912737 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddebdd4c-4145-4a82-baf4-edda221158f8-xtables-lock\") pod \"kube-proxy-42zhm\" (UID: \"ddebdd4c-4145-4a82-baf4-edda221158f8\") " pod="kube-system/kube-proxy-42zhm" Jan 29 16:40:24.912908 kubelet[1861]: I0129 16:40:24.912823 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0598894a-40f7-4ecd-8a4d-913b5a8e613d-xtables-lock\") pod \"calico-node-plhx4\" (UID: 
\"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.913217 kubelet[1861]: I0129 16:40:24.912886 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0598894a-40f7-4ecd-8a4d-913b5a8e613d-node-certs\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.913217 kubelet[1861]: I0129 16:40:24.912969 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0598894a-40f7-4ecd-8a4d-913b5a8e613d-cni-bin-dir\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.913217 kubelet[1861]: I0129 16:40:24.913033 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0598894a-40f7-4ecd-8a4d-913b5a8e613d-flexvol-driver-host\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.913217 kubelet[1861]: I0129 16:40:24.913099 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4cbd60f8-abf3-4e44-b98f-73df647c2adc-registration-dir\") pod \"csi-node-driver-n9lss\" (UID: \"4cbd60f8-abf3-4e44-b98f-73df647c2adc\") " pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:24.913217 kubelet[1861]: I0129 16:40:24.913163 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgv6c\" (UniqueName: \"kubernetes.io/projected/0598894a-40f7-4ecd-8a4d-913b5a8e613d-kube-api-access-fgv6c\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.913597 kubelet[1861]: I0129 16:40:24.913277 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddebdd4c-4145-4a82-baf4-edda221158f8-kube-proxy\") pod \"kube-proxy-42zhm\" (UID: \"ddebdd4c-4145-4a82-baf4-edda221158f8\") " pod="kube-system/kube-proxy-42zhm" Jan 29 16:40:24.913597 kubelet[1861]: I0129 16:40:24.913357 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp4xc\" (UniqueName: \"kubernetes.io/projected/ddebdd4c-4145-4a82-baf4-edda221158f8-kube-api-access-wp4xc\") pod \"kube-proxy-42zhm\" (UID: \"ddebdd4c-4145-4a82-baf4-edda221158f8\") " pod="kube-system/kube-proxy-42zhm" Jan 29 16:40:24.913597 kubelet[1861]: I0129 16:40:24.913428 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0598894a-40f7-4ecd-8a4d-913b5a8e613d-lib-modules\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.913597 kubelet[1861]: I0129 16:40:24.913493 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0598894a-40f7-4ecd-8a4d-913b5a8e613d-policysync\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " 
pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.913597 kubelet[1861]: I0129 16:40:24.913553 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0598894a-40f7-4ecd-8a4d-913b5a8e613d-tigera-ca-bundle\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.913894 kubelet[1861]: I0129 16:40:24.913616 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0598894a-40f7-4ecd-8a4d-913b5a8e613d-var-run-calico\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:24.913894 kubelet[1861]: I0129 16:40:24.913680 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0598894a-40f7-4ecd-8a4d-913b5a8e613d-var-lib-calico\") pod \"calico-node-plhx4\" (UID: \"0598894a-40f7-4ecd-8a4d-913b5a8e613d\") " pod="calico-system/calico-node-plhx4" Jan 29 16:40:25.019989 kubelet[1861]: E0129 16:40:25.019336 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.019989 kubelet[1861]: W0129 16:40:25.019410 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.019989 kubelet[1861]: E0129 16:40:25.019455 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.020489 kubelet[1861]: E0129 16:40:25.020163 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.020489 kubelet[1861]: W0129 16:40:25.020188 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.020489 kubelet[1861]: E0129 16:40:25.020362 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.022389 kubelet[1861]: E0129 16:40:25.021054 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.022389 kubelet[1861]: W0129 16:40:25.021143 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.022389 kubelet[1861]: E0129 16:40:25.021179 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:40:25.022998 kubelet[1861]: E0129 16:40:25.022946 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.023353 kubelet[1861]: W0129 16:40:25.023226 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.023824 kubelet[1861]: E0129 16:40:25.023781 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.030627 kubelet[1861]: E0129 16:40:25.030579 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.031145 kubelet[1861]: W0129 16:40:25.030860 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.031145 kubelet[1861]: E0129 16:40:25.030943 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.033726 kubelet[1861]: E0129 16:40:25.033399 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.033726 kubelet[1861]: W0129 16:40:25.033443 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.033726 kubelet[1861]: E0129 16:40:25.033500 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.034362 kubelet[1861]: E0129 16:40:25.034332 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.034833 kubelet[1861]: W0129 16:40:25.034502 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.034833 kubelet[1861]: E0129 16:40:25.034552 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.037390 kubelet[1861]: E0129 16:40:25.037357 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.039273 kubelet[1861]: W0129 16:40:25.037547 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.039273 kubelet[1861]: E0129 16:40:25.037586 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:40:25.044569 kubelet[1861]: E0129 16:40:25.044528 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.050427 kubelet[1861]: W0129 16:40:25.050369 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.050691 kubelet[1861]: E0129 16:40:25.050650 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.051337 kubelet[1861]: E0129 16:40:25.051290 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.051541 kubelet[1861]: W0129 16:40:25.051511 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.051730 kubelet[1861]: E0129 16:40:25.051687 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.056411 kubelet[1861]: E0129 16:40:25.056372 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.056624 kubelet[1861]: W0129 16:40:25.056592 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.056804 kubelet[1861]: E0129 16:40:25.056775 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.057415 kubelet[1861]: E0129 16:40:25.057386 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.057597 kubelet[1861]: W0129 16:40:25.057569 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.057771 kubelet[1861]: E0129 16:40:25.057742 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.062119 kubelet[1861]: E0129 16:40:25.061829 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.062119 kubelet[1861]: W0129 16:40:25.061873 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.062119 kubelet[1861]: E0129 16:40:25.061908 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:40:25.063358 kubelet[1861]: E0129 16:40:25.062810 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.063358 kubelet[1861]: W0129 16:40:25.062861 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.063358 kubelet[1861]: E0129 16:40:25.062900 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.069657 kubelet[1861]: E0129 16:40:25.068384 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.069888 kubelet[1861]: W0129 16:40:25.069838 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.070041 kubelet[1861]: E0129 16:40:25.070013 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.070695 kubelet[1861]: E0129 16:40:25.070655 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.071095 kubelet[1861]: W0129 16:40:25.070850 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.071095 kubelet[1861]: E0129 16:40:25.070892 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.072767 kubelet[1861]: E0129 16:40:25.072394 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.072767 kubelet[1861]: W0129 16:40:25.072426 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.072767 kubelet[1861]: E0129 16:40:25.072549 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.074466 kubelet[1861]: E0129 16:40:25.074435 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.074860 kubelet[1861]: W0129 16:40:25.074626 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.074860 kubelet[1861]: E0129 16:40:25.074668 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:40:25.077761 kubelet[1861]: E0129 16:40:25.077456 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.077761 kubelet[1861]: W0129 16:40:25.077491 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.077761 kubelet[1861]: E0129 16:40:25.077519 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.079998 kubelet[1861]: E0129 16:40:25.078586 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.079998 kubelet[1861]: W0129 16:40:25.078629 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.079998 kubelet[1861]: E0129 16:40:25.078684 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.082702 kubelet[1861]: E0129 16:40:25.082558 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.082883 kubelet[1861]: W0129 16:40:25.082850 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.083360 kubelet[1861]: E0129 16:40:25.083029 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:40:25.092832 kubelet[1861]: E0129 16:40:25.092556 1861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:40:25.092832 kubelet[1861]: W0129 16:40:25.092580 1861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:40:25.092832 kubelet[1861]: E0129 16:40:25.092602 1861 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:40:25.192725 containerd[1486]: time="2025-01-29T16:40:25.192515431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-plhx4,Uid:0598894a-40f7-4ecd-8a4d-913b5a8e613d,Namespace:calico-system,Attempt:0,}" Jan 29 16:40:25.207553 containerd[1486]: time="2025-01-29T16:40:25.206840989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-42zhm,Uid:ddebdd4c-4145-4a82-baf4-edda221158f8,Namespace:kube-system,Attempt:0,}" Jan 29 16:40:25.831884 kubelet[1861]: E0129 16:40:25.831815 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:25.896985 containerd[1486]: time="2025-01-29T16:40:25.896814865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:40:25.899261 containerd[1486]: time="2025-01-29T16:40:25.899147079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 16:40:25.901046 containerd[1486]: time="2025-01-29T16:40:25.900985477Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:40:25.906250 containerd[1486]: time="2025-01-29T16:40:25.905475999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:40:25.906397 containerd[1486]: time="2025-01-29T16:40:25.906323108Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:40:25.908054 containerd[1486]: time="2025-01-29T16:40:25.908028306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:40:25.910486 containerd[1486]: time="2025-01-29T16:40:25.910426173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 703.394406ms" Jan 29 16:40:25.913000 containerd[1486]: time="2025-01-29T16:40:25.912970906Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.823639ms" Jan 29 16:40:26.053324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776003020.mount: Deactivated successfully. Jan 29 16:40:26.087363 containerd[1486]: time="2025-01-29T16:40:26.087136804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:40:26.087593 containerd[1486]: time="2025-01-29T16:40:26.087567081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:40:26.087848 containerd[1486]: time="2025-01-29T16:40:26.087722141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:40:26.089328 containerd[1486]: time="2025-01-29T16:40:26.088290227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:40:26.101715 containerd[1486]: time="2025-01-29T16:40:26.101491806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:40:26.101715 containerd[1486]: time="2025-01-29T16:40:26.101544265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:40:26.101715 containerd[1486]: time="2025-01-29T16:40:26.101562669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:40:26.101715 containerd[1486]: time="2025-01-29T16:40:26.101635005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:40:26.188424 kubelet[1861]: E0129 16:40:26.188383 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:26.201435 systemd[1]: Started cri-containerd-32db68fb19ec8770f3a278450675a5e0904b51836a61db6272d0b7eb81b987be.scope - libcontainer container 32db68fb19ec8770f3a278450675a5e0904b51836a61db6272d0b7eb81b987be. Jan 29 16:40:26.203174 systemd[1]: Started cri-containerd-b54360c47fa9f5c37a1d6d85bb4b8f9e0e34339ad77f4cd31606b34bfa910b6b.scope - libcontainer container b54360c47fa9f5c37a1d6d85bb4b8f9e0e34339ad77f4cd31606b34bfa910b6b. Jan 29 16:40:26.231974 containerd[1486]: time="2025-01-29T16:40:26.231906118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-plhx4,Uid:0598894a-40f7-4ecd-8a4d-913b5a8e613d,Namespace:calico-system,Attempt:0,} returns sandbox id \"32db68fb19ec8770f3a278450675a5e0904b51836a61db6272d0b7eb81b987be\"" Jan 29 16:40:26.234591 containerd[1486]: time="2025-01-29T16:40:26.234345433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 16:40:26.239488 containerd[1486]: time="2025-01-29T16:40:26.239451901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-42zhm,Uid:ddebdd4c-4145-4a82-baf4-edda221158f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b54360c47fa9f5c37a1d6d85bb4b8f9e0e34339ad77f4cd31606b34bfa910b6b\"" Jan 29 16:40:26.832288 kubelet[1861]: E0129 16:40:26.831997 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:27.832616 kubelet[1861]: E0129 16:40:27.832515 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:27.994661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538010050.mount: Deactivated successfully. 
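
The repeated driver-call failures logged at 16:40:25 come from kubelet's FlexVolume probe: on each plugin re-scan it walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for vendor~driver directories and runs each driver binary with the "init" argument, expecting a small JSON status object on stdout. The nodeagent~uds directory already exists on this node (presumably via the flexvol-driver-host host-path volume listed for calico-node-plhx4 above), but the uds binary has not been installed yet, since the pod2daemon-flexvol image that ships it is only pulled a second later, so each probe produces no output and decoding the empty string fails with "unexpected end of JSON input". A minimal sketch of that probe shape in Go (illustrative only, not kubelet source; the struct fields follow the documented FlexVolume reply format):

    // Illustrative sketch, not kubelet source: run "<driver> init" and decode
    // the JSON reply, roughly what driver-call.go is doing in the entries above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus mirrors the documented FlexVolume reply shape ("status",
    // optional "capabilities"); the field set here is an assumption for the sketch.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probe(driver string) (*DriverStatus, error) {
        out, err := exec.Command(driver, "init").Output()
        if err != nil {
            // The binary is missing here, so the call fails before producing any
            // output (compare the W-level "driver call failed" lines above).
            return nil, fmt.Errorf("driver call failed: %v, output: %q", err, out)
        }
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // Unmarshalling empty output yields exactly "unexpected end of JSON input".
            return nil, err
        }
        return &st, nil
    }

    func main() {
        _, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println(err)
    }

Once the flexvol-driver container has placed the uds binary in that directory, a probe of this kind would get a valid JSON reply, which is consistent with the warnings not recurring later in this log.
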
Jan 29 16:40:28.137703 containerd[1486]: time="2025-01-29T16:40:28.137587634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:28.139115 containerd[1486]: time="2025-01-29T16:40:28.139061588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 16:40:28.140258 containerd[1486]: time="2025-01-29T16:40:28.140199843Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:28.142979 containerd[1486]: time="2025-01-29T16:40:28.142955301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:28.144042 containerd[1486]: time="2025-01-29T16:40:28.143645826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.909266179s" Jan 29 16:40:28.144042 containerd[1486]: time="2025-01-29T16:40:28.143680300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 16:40:28.145551 containerd[1486]: time="2025-01-29T16:40:28.145302293Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 16:40:28.146595 containerd[1486]: time="2025-01-29T16:40:28.146462909Z" level=info msg="CreateContainer within sandbox \"32db68fb19ec8770f3a278450675a5e0904b51836a61db6272d0b7eb81b987be\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 16:40:28.170274 containerd[1486]: time="2025-01-29T16:40:28.170215357Z" level=info msg="CreateContainer within sandbox \"32db68fb19ec8770f3a278450675a5e0904b51836a61db6272d0b7eb81b987be\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e2a3b2f7219c906f25f400653395dca057514308bc670a1a2d410fff4ae91c2\"" Jan 29 16:40:28.171034 containerd[1486]: time="2025-01-29T16:40:28.170999798Z" level=info msg="StartContainer for \"7e2a3b2f7219c906f25f400653395dca057514308bc670a1a2d410fff4ae91c2\"" Jan 29 16:40:28.190458 kubelet[1861]: E0129 16:40:28.190421 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:28.208819 systemd[1]: Started cri-containerd-7e2a3b2f7219c906f25f400653395dca057514308bc670a1a2d410fff4ae91c2.scope - libcontainer container 7e2a3b2f7219c906f25f400653395dca057514308bc670a1a2d410fff4ae91c2. 
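
The reported pull time can be cross-checked against the surrounding entries: containerd logged the PullImage request for ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1 at 16:40:26.234 and this Pulled event at 16:40:28.143, a gap of roughly 28.1436 − 26.2343 ≈ 1.909 s, matching the "1.909266179s" it reports; the small residual is expected, since the log timestamps mark when the messages were written rather than the exact start and end of the transfer.
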
Jan 29 16:40:28.256103 containerd[1486]: time="2025-01-29T16:40:28.255041488Z" level=info msg="StartContainer for \"7e2a3b2f7219c906f25f400653395dca057514308bc670a1a2d410fff4ae91c2\" returns successfully" Jan 29 16:40:28.261789 systemd[1]: cri-containerd-7e2a3b2f7219c906f25f400653395dca057514308bc670a1a2d410fff4ae91c2.scope: Deactivated successfully. Jan 29 16:40:28.284310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e2a3b2f7219c906f25f400653395dca057514308bc670a1a2d410fff4ae91c2-rootfs.mount: Deactivated successfully. Jan 29 16:40:28.394511 containerd[1486]: time="2025-01-29T16:40:28.393411301Z" level=info msg="shim disconnected" id=7e2a3b2f7219c906f25f400653395dca057514308bc670a1a2d410fff4ae91c2 namespace=k8s.io Jan 29 16:40:28.394511 containerd[1486]: time="2025-01-29T16:40:28.393670938Z" level=warning msg="cleaning up after shim disconnected" id=7e2a3b2f7219c906f25f400653395dca057514308bc670a1a2d410fff4ae91c2 namespace=k8s.io Jan 29 16:40:28.394511 containerd[1486]: time="2025-01-29T16:40:28.393890890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:40:28.833744 kubelet[1861]: E0129 16:40:28.833601 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:29.559962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295336180.mount: Deactivated successfully. Jan 29 16:40:29.834841 kubelet[1861]: E0129 16:40:29.834622 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:30.061945 containerd[1486]: time="2025-01-29T16:40:30.061603254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:30.063738 containerd[1486]: time="2025-01-29T16:40:30.063691411Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 16:40:30.065205 containerd[1486]: time="2025-01-29T16:40:30.065140559Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:30.067650 containerd[1486]: time="2025-01-29T16:40:30.067600282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:30.068433 containerd[1486]: time="2025-01-29T16:40:30.068329069Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.922994255s" Jan 29 16:40:30.068433 containerd[1486]: time="2025-01-29T16:40:30.068368873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 16:40:30.070060 containerd[1486]: time="2025-01-29T16:40:30.069593991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 16:40:30.070901 containerd[1486]: time="2025-01-29T16:40:30.070873831Z" level=info msg="CreateContainer within sandbox \"b54360c47fa9f5c37a1d6d85bb4b8f9e0e34339ad77f4cd31606b34bfa910b6b\" 
for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:40:30.092862 containerd[1486]: time="2025-01-29T16:40:30.092704675Z" level=info msg="CreateContainer within sandbox \"b54360c47fa9f5c37a1d6d85bb4b8f9e0e34339ad77f4cd31606b34bfa910b6b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"663804729c11c4e1e0c56ddc019503d5c2adca8eac0bb0c5064559419820121d\"" Jan 29 16:40:30.094179 containerd[1486]: time="2025-01-29T16:40:30.094127313Z" level=info msg="StartContainer for \"663804729c11c4e1e0c56ddc019503d5c2adca8eac0bb0c5064559419820121d\"" Jan 29 16:40:30.141545 systemd[1]: Started cri-containerd-663804729c11c4e1e0c56ddc019503d5c2adca8eac0bb0c5064559419820121d.scope - libcontainer container 663804729c11c4e1e0c56ddc019503d5c2adca8eac0bb0c5064559419820121d. Jan 29 16:40:30.177953 containerd[1486]: time="2025-01-29T16:40:30.177905068Z" level=info msg="StartContainer for \"663804729c11c4e1e0c56ddc019503d5c2adca8eac0bb0c5064559419820121d\" returns successfully" Jan 29 16:40:30.188919 kubelet[1861]: E0129 16:40:30.188855 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:30.835804 kubelet[1861]: E0129 16:40:30.835739 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:32.001702 systemd-resolved[1390]: Clock change detected. Flushing caches. Jan 29 16:40:32.002460 systemd-timesyncd[1391]: Contacted time server 212.83.158.83:123 (2.flatcar.pool.ntp.org). Jan 29 16:40:32.002586 systemd-timesyncd[1391]: Initial clock synchronization to Wed 2025-01-29 16:40:32.001553 UTC. 
Jan 29 16:40:32.488438 kubelet[1861]: E0129 16:40:32.488208 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:32.840223 kubelet[1861]: E0129 16:40:32.839914 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:33.489347 kubelet[1861]: E0129 16:40:33.489282 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:34.489521 kubelet[1861]: E0129 16:40:34.489436 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:34.840406 kubelet[1861]: E0129 16:40:34.840249 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:35.490390 kubelet[1861]: E0129 16:40:35.490334 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:36.371407 containerd[1486]: time="2025-01-29T16:40:36.371307958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:36.372844 containerd[1486]: time="2025-01-29T16:40:36.372797090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 16:40:36.374352 containerd[1486]: time="2025-01-29T16:40:36.374256077Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:36.376996 containerd[1486]: time="2025-01-29T16:40:36.376967322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:36.377869 containerd[1486]: time="2025-01-29T16:40:36.377701168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.656686995s" Jan 29 16:40:36.377869 containerd[1486]: time="2025-01-29T16:40:36.377749929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 16:40:36.380675 containerd[1486]: time="2025-01-29T16:40:36.380599865Z" level=info msg="CreateContainer within sandbox \"32db68fb19ec8770f3a278450675a5e0904b51836a61db6272d0b7eb81b987be\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 16:40:36.403390 containerd[1486]: time="2025-01-29T16:40:36.403198889Z" level=info msg="CreateContainer within sandbox 
\"32db68fb19ec8770f3a278450675a5e0904b51836a61db6272d0b7eb81b987be\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d\"" Jan 29 16:40:36.404105 containerd[1486]: time="2025-01-29T16:40:36.404002306Z" level=info msg="StartContainer for \"af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d\"" Jan 29 16:40:36.442847 systemd[1]: Started cri-containerd-af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d.scope - libcontainer container af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d. Jan 29 16:40:36.477348 containerd[1486]: time="2025-01-29T16:40:36.477171655Z" level=info msg="StartContainer for \"af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d\" returns successfully" Jan 29 16:40:36.491181 kubelet[1861]: E0129 16:40:36.491144 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:36.841278 kubelet[1861]: E0129 16:40:36.840261 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:36.913752 kubelet[1861]: I0129 16:40:36.912539 1861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-42zhm" podStartSLOduration=10.084065936 podStartE2EDuration="13.912511144s" podCreationTimestamp="2025-01-29 16:40:23 +0000 UTC" firstStartedPulling="2025-01-29 16:40:26.241037775 +0000 UTC m=+4.442496702" lastFinishedPulling="2025-01-29 16:40:30.069482983 +0000 UTC m=+8.270941910" observedRunningTime="2025-01-29 16:40:30.243279558 +0000 UTC m=+8.444738515" watchObservedRunningTime="2025-01-29 16:40:36.912511144 +0000 UTC m=+14.462582380" Jan 29 16:40:37.492274 kubelet[1861]: E0129 16:40:37.492162 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:37.574127 systemd[1]: cri-containerd-af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d.scope: Deactivated successfully. Jan 29 16:40:37.575528 systemd[1]: cri-containerd-af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d.scope: Consumed 719ms CPU time, 169.6M memory peak, 151M written to disk. Jan 29 16:40:37.626099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d-rootfs.mount: Deactivated successfully. Jan 29 16:40:37.657318 kubelet[1861]: I0129 16:40:37.657275 1861 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 16:40:38.492761 kubelet[1861]: E0129 16:40:38.492607 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:38.853109 systemd[1]: Created slice kubepods-besteffort-pod4cbd60f8_abf3_4e44_b98f_73df647c2adc.slice - libcontainer container kubepods-besteffort-pod4cbd60f8_abf3_4e44_b98f_73df647c2adc.slice. 
Jan 29 16:40:38.858729 containerd[1486]: time="2025-01-29T16:40:38.858410388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:0,}" Jan 29 16:40:38.895545 containerd[1486]: time="2025-01-29T16:40:38.895271380Z" level=info msg="shim disconnected" id=af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d namespace=k8s.io Jan 29 16:40:38.895545 containerd[1486]: time="2025-01-29T16:40:38.895376878Z" level=warning msg="cleaning up after shim disconnected" id=af25cdf36563bd4fe8596adbcdbb8f144132560247fb35526fce54798417f33d namespace=k8s.io Jan 29 16:40:38.895545 containerd[1486]: time="2025-01-29T16:40:38.895401294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:40:39.019047 containerd[1486]: time="2025-01-29T16:40:39.018967162Z" level=error msg="Failed to destroy network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:39.020799 containerd[1486]: time="2025-01-29T16:40:39.019385917Z" level=error msg="encountered an error cleaning up failed sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:39.020799 containerd[1486]: time="2025-01-29T16:40:39.019457792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:39.021490 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156-shm.mount: Deactivated successfully. 
Jan 29 16:40:39.021612 kubelet[1861]: E0129 16:40:39.021482 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:39.021612 kubelet[1861]: E0129 16:40:39.021552 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:39.021612 kubelet[1861]: E0129 16:40:39.021574 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:39.021777 kubelet[1861]: E0129 16:40:39.021676 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:39.493982 kubelet[1861]: E0129 16:40:39.493809 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:39.903898 kubelet[1861]: I0129 16:40:39.903819 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156" Jan 29 16:40:39.905876 containerd[1486]: time="2025-01-29T16:40:39.905781081Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:39.909683 containerd[1486]: time="2025-01-29T16:40:39.907082201Z" level=info msg="Ensure that sandbox 43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156 in task-service has been cleanup successfully" Jan 29 16:40:39.910320 containerd[1486]: time="2025-01-29T16:40:39.910049106Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:39.910320 containerd[1486]: time="2025-01-29T16:40:39.910114088Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:39.912779 systemd[1]: run-netns-cni\x2d3fe29a7e\x2da501\x2dc04a\x2dca74\x2d036171e01b6f.mount: 
Deactivated successfully. Jan 29 16:40:39.916782 containerd[1486]: time="2025-01-29T16:40:39.914053025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:1,}" Jan 29 16:40:39.922321 containerd[1486]: time="2025-01-29T16:40:39.922216215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 16:40:40.038299 containerd[1486]: time="2025-01-29T16:40:40.038241170Z" level=error msg="Failed to destroy network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:40.038927 containerd[1486]: time="2025-01-29T16:40:40.038898322Z" level=error msg="encountered an error cleaning up failed sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:40.039088 containerd[1486]: time="2025-01-29T16:40:40.039063402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:40.039525 kubelet[1861]: E0129 16:40:40.039462 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:40.039596 kubelet[1861]: E0129 16:40:40.039544 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:40.039596 kubelet[1861]: E0129 16:40:40.039570 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:40.039704 kubelet[1861]: E0129 16:40:40.039633 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:40.041171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3-shm.mount: Deactivated successfully. Jan 29 16:40:40.495205 kubelet[1861]: E0129 16:40:40.494505 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:40.923323 kubelet[1861]: I0129 16:40:40.923276 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3" Jan 29 16:40:40.925878 containerd[1486]: time="2025-01-29T16:40:40.924387968Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:40.925878 containerd[1486]: time="2025-01-29T16:40:40.924824376Z" level=info msg="Ensure that sandbox c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3 in task-service has been cleanup successfully" Jan 29 16:40:40.928929 containerd[1486]: time="2025-01-29T16:40:40.928733748Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:40.928929 containerd[1486]: time="2025-01-29T16:40:40.928785195Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:40.929594 containerd[1486]: time="2025-01-29T16:40:40.929525564Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:40.929868 containerd[1486]: time="2025-01-29T16:40:40.929754172Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:40.929868 containerd[1486]: time="2025-01-29T16:40:40.929858217Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:40.932303 containerd[1486]: time="2025-01-29T16:40:40.930939245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:2,}" Jan 29 16:40:40.931556 systemd[1]: run-netns-cni\x2df93b2bec\x2d59c6\x2d9a95\x2d210d\x2dc96b9073bee1.mount: Deactivated successfully. 
Jan 29 16:40:41.048256 containerd[1486]: time="2025-01-29T16:40:41.048183877Z" level=error msg="Failed to destroy network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:41.048612 containerd[1486]: time="2025-01-29T16:40:41.048524997Z" level=error msg="encountered an error cleaning up failed sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:41.048612 containerd[1486]: time="2025-01-29T16:40:41.048598545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:41.051000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af-shm.mount: Deactivated successfully. Jan 29 16:40:41.051887 kubelet[1861]: E0129 16:40:41.050999 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:41.051887 kubelet[1861]: E0129 16:40:41.051091 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:41.051887 kubelet[1861]: E0129 16:40:41.051139 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:41.052107 kubelet[1861]: E0129 16:40:41.051210 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:41.496185 kubelet[1861]: E0129 16:40:41.496088 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:41.928599 kubelet[1861]: I0129 16:40:41.928443 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af" Jan 29 16:40:41.929996 containerd[1486]: time="2025-01-29T16:40:41.929296193Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:41.929996 containerd[1486]: time="2025-01-29T16:40:41.929768970Z" level=info msg="Ensure that sandbox 7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af in task-service has been cleanup successfully" Jan 29 16:40:41.932939 containerd[1486]: time="2025-01-29T16:40:41.932730875Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:41.932939 containerd[1486]: time="2025-01-29T16:40:41.932805264Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:41.935512 systemd[1]: run-netns-cni\x2db9039b12\x2d8db6\x2dfe70\x2d04a2\x2d3ae6640882a3.mount: Deactivated successfully. Jan 29 16:40:41.936493 containerd[1486]: time="2025-01-29T16:40:41.935788309Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:41.936493 containerd[1486]: time="2025-01-29T16:40:41.935947157Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:41.936493 containerd[1486]: time="2025-01-29T16:40:41.935974148Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:41.941355 containerd[1486]: time="2025-01-29T16:40:41.940153576Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:41.941355 containerd[1486]: time="2025-01-29T16:40:41.940321701Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:41.941355 containerd[1486]: time="2025-01-29T16:40:41.940348912Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:41.942152 containerd[1486]: time="2025-01-29T16:40:41.942103413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:3,}" Jan 29 16:40:42.048553 containerd[1486]: time="2025-01-29T16:40:42.048484200Z" level=error msg="Failed to destroy network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:42.048869 containerd[1486]: time="2025-01-29T16:40:42.048839136Z" level=error msg="encountered an error cleaning up failed 
sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:42.048933 containerd[1486]: time="2025-01-29T16:40:42.048903927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:42.051450 kubelet[1861]: E0129 16:40:42.049174 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:42.051450 kubelet[1861]: E0129 16:40:42.049232 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:42.051450 kubelet[1861]: E0129 16:40:42.049264 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:42.051220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906-shm.mount: Deactivated successfully. 
Jan 29 16:40:42.051661 kubelet[1861]: E0129 16:40:42.049307 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:42.373917 kubelet[1861]: I0129 16:40:42.373212 1861 topology_manager.go:215] "Topology Admit Handler" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" podNamespace="default" podName="nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:42.391468 systemd[1]: Created slice kubepods-besteffort-pod7c5c4917_93fb_4240_8cff_3be51f6e8145.slice - libcontainer container kubepods-besteffort-pod7c5c4917_93fb_4240_8cff_3be51f6e8145.slice. Jan 29 16:40:42.485267 kubelet[1861]: I0129 16:40:42.485138 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbw6h\" (UniqueName: \"kubernetes.io/projected/7c5c4917-93fb-4240-8cff-3be51f6e8145-kube-api-access-xbw6h\") pod \"nginx-deployment-85f456d6dd-kccdw\" (UID: \"7c5c4917-93fb-4240-8cff-3be51f6e8145\") " pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:42.497266 kubelet[1861]: E0129 16:40:42.497188 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:42.698996 containerd[1486]: time="2025-01-29T16:40:42.698425365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:0,}" Jan 29 16:40:42.801511 containerd[1486]: time="2025-01-29T16:40:42.801469995Z" level=error msg="Failed to destroy network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:42.802027 containerd[1486]: time="2025-01-29T16:40:42.802001462Z" level=error msg="encountered an error cleaning up failed sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:42.802795 containerd[1486]: time="2025-01-29T16:40:42.802128129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:42.802888 kubelet[1861]: E0129 16:40:42.802312 1861 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:42.802888 kubelet[1861]: E0129 16:40:42.802368 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:42.802888 kubelet[1861]: E0129 16:40:42.802393 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:42.802989 kubelet[1861]: E0129 16:40:42.802438 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kccdw" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" Jan 29 16:40:42.932363 kubelet[1861]: I0129 16:40:42.931366 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1" Jan 29 16:40:42.933913 containerd[1486]: time="2025-01-29T16:40:42.933694392Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:40:42.934893 containerd[1486]: time="2025-01-29T16:40:42.934134928Z" level=info msg="Ensure that sandbox 712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1 in task-service has been cleanup successfully" Jan 29 16:40:42.934893 containerd[1486]: time="2025-01-29T16:40:42.934372854Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:40:42.934893 containerd[1486]: time="2025-01-29T16:40:42.934389255Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:40:42.934893 containerd[1486]: time="2025-01-29T16:40:42.934806077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:1,}" Jan 29 16:40:42.936542 kubelet[1861]: I0129 16:40:42.936502 1861 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906" Jan 29 16:40:42.939684 containerd[1486]: time="2025-01-29T16:40:42.939640363Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:40:42.939860 containerd[1486]: time="2025-01-29T16:40:42.939823006Z" level=info msg="Ensure that sandbox 502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906 in task-service has been cleanup successfully" Jan 29 16:40:42.940327 containerd[1486]: time="2025-01-29T16:40:42.939991863Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:40:42.940327 containerd[1486]: time="2025-01-29T16:40:42.940008684Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:40:42.941695 containerd[1486]: time="2025-01-29T16:40:42.941085924Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:42.941695 containerd[1486]: time="2025-01-29T16:40:42.941266884Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:42.941695 containerd[1486]: time="2025-01-29T16:40:42.941295217Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:42.942995 containerd[1486]: time="2025-01-29T16:40:42.942479749Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:42.942995 containerd[1486]: time="2025-01-29T16:40:42.942837660Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:42.942995 containerd[1486]: time="2025-01-29T16:40:42.942869369Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:42.944087 containerd[1486]: time="2025-01-29T16:40:42.943816576Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:42.944087 containerd[1486]: time="2025-01-29T16:40:42.943963702Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:42.944087 containerd[1486]: time="2025-01-29T16:40:42.943988659Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:42.947833 containerd[1486]: time="2025-01-29T16:40:42.946915187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:4,}" Jan 29 16:40:42.966345 systemd[1]: run-netns-cni\x2deedf89e3\x2dc4b4\x2d7683\x2dcb11\x2d7323376cd6c5.mount: Deactivated successfully. Jan 29 16:40:42.966466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1-shm.mount: Deactivated successfully. Jan 29 16:40:42.966570 systemd[1]: run-netns-cni\x2dc5eb1413\x2d212b\x2d4ded\x2dc7ab\x2dd9796830aa56.mount: Deactivated successfully. 
Jan 29 16:40:43.065387 containerd[1486]: time="2025-01-29T16:40:43.065332830Z" level=error msg="Failed to destroy network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:43.066752 containerd[1486]: time="2025-01-29T16:40:43.065741346Z" level=error msg="encountered an error cleaning up failed sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:43.066752 containerd[1486]: time="2025-01-29T16:40:43.065795868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:43.068147 kubelet[1861]: E0129 16:40:43.067080 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:43.068147 kubelet[1861]: E0129 16:40:43.067135 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:43.068147 kubelet[1861]: E0129 16:40:43.067162 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:43.067354 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196-shm.mount: Deactivated successfully. 
Jan 29 16:40:43.068351 kubelet[1861]: E0129 16:40:43.067203 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kccdw" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" Jan 29 16:40:43.089416 containerd[1486]: time="2025-01-29T16:40:43.089375181Z" level=error msg="Failed to destroy network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:43.090477 containerd[1486]: time="2025-01-29T16:40:43.089851073Z" level=error msg="encountered an error cleaning up failed sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:43.090477 containerd[1486]: time="2025-01-29T16:40:43.089913120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:43.091116 kubelet[1861]: E0129 16:40:43.090723 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:43.091116 kubelet[1861]: E0129 16:40:43.090816 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:43.091116 kubelet[1861]: E0129 16:40:43.090861 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:43.091299 kubelet[1861]: E0129 16:40:43.091259 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:43.480799 kubelet[1861]: E0129 16:40:43.480729 1861 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:43.497397 kubelet[1861]: E0129 16:40:43.497345 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:43.956679 kubelet[1861]: I0129 16:40:43.955742 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69" Jan 29 16:40:43.957384 containerd[1486]: time="2025-01-29T16:40:43.957341451Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:40:43.959545 containerd[1486]: time="2025-01-29T16:40:43.958916806Z" level=info msg="Ensure that sandbox 11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69 in task-service has been cleanup successfully" Jan 29 16:40:43.962913 containerd[1486]: time="2025-01-29T16:40:43.962851105Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:40:43.963065 containerd[1486]: time="2025-01-29T16:40:43.963036523Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns successfully" Jan 29 16:40:43.966294 containerd[1486]: time="2025-01-29T16:40:43.965357315Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:40:43.966294 containerd[1486]: time="2025-01-29T16:40:43.965531081Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:40:43.966294 containerd[1486]: time="2025-01-29T16:40:43.965561178Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:40:43.966344 systemd[1]: run-netns-cni\x2d2a0e2f22\x2db602\x2d96c2\x2db05c\x2d5db4cce92d8a.mount: Deactivated successfully. Jan 29 16:40:43.966441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69-shm.mount: Deactivated successfully. 
Jan 29 16:40:43.968045 containerd[1486]: time="2025-01-29T16:40:43.967594611Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:43.968470 containerd[1486]: time="2025-01-29T16:40:43.968376588Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:43.968948 containerd[1486]: time="2025-01-29T16:40:43.968692581Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:43.970688 kubelet[1861]: I0129 16:40:43.970092 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196" Jan 29 16:40:43.971426 containerd[1486]: time="2025-01-29T16:40:43.970980632Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:43.971426 containerd[1486]: time="2025-01-29T16:40:43.971192649Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:43.971426 containerd[1486]: time="2025-01-29T16:40:43.971283560Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:43.974234 containerd[1486]: time="2025-01-29T16:40:43.973591418Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:40:43.974234 containerd[1486]: time="2025-01-29T16:40:43.973999664Z" level=info msg="Ensure that sandbox 230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196 in task-service has been cleanup successfully" Jan 29 16:40:43.974712 containerd[1486]: time="2025-01-29T16:40:43.974613505Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:40:43.975248 containerd[1486]: time="2025-01-29T16:40:43.974845390Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:40:43.975248 containerd[1486]: time="2025-01-29T16:40:43.974993438Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:43.975248 containerd[1486]: time="2025-01-29T16:40:43.975132779Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:43.975248 containerd[1486]: time="2025-01-29T16:40:43.975157946Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:43.975711 systemd[1]: run-netns-cni\x2d4c979ed6\x2dbb08\x2d9206\x2d65dd\x2db5f457146c17.mount: Deactivated successfully. 
Jan 29 16:40:43.981732 containerd[1486]: time="2025-01-29T16:40:43.980966941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:5,}" Jan 29 16:40:43.998666 containerd[1486]: time="2025-01-29T16:40:43.998304748Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:40:43.998666 containerd[1486]: time="2025-01-29T16:40:43.998506296Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:40:43.998666 containerd[1486]: time="2025-01-29T16:40:43.998569795Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:40:44.018015 containerd[1486]: time="2025-01-29T16:40:44.017948640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:2,}" Jan 29 16:40:44.122778 containerd[1486]: time="2025-01-29T16:40:44.122730499Z" level=error msg="Failed to destroy network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:44.123381 containerd[1486]: time="2025-01-29T16:40:44.123342036Z" level=error msg="encountered an error cleaning up failed sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:44.125679 containerd[1486]: time="2025-01-29T16:40:44.125651647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:44.126001 kubelet[1861]: E0129 16:40:44.125958 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:44.126064 kubelet[1861]: E0129 16:40:44.126025 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:44.126064 kubelet[1861]: E0129 16:40:44.126050 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:44.126131 kubelet[1861]: E0129 16:40:44.126095 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:44.154029 containerd[1486]: time="2025-01-29T16:40:44.153969798Z" level=error msg="Failed to destroy network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:44.154311 containerd[1486]: time="2025-01-29T16:40:44.154276132Z" level=error msg="encountered an error cleaning up failed sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:44.154372 containerd[1486]: time="2025-01-29T16:40:44.154344000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:44.154607 kubelet[1861]: E0129 16:40:44.154573 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:44.154681 kubelet[1861]: E0129 16:40:44.154656 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:44.154718 kubelet[1861]: E0129 16:40:44.154681 1861 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:44.155653 kubelet[1861]: E0129 16:40:44.154755 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kccdw" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" Jan 29 16:40:44.498037 kubelet[1861]: E0129 16:40:44.497981 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:44.968709 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6-shm.mount: Deactivated successfully. Jan 29 16:40:44.968944 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523-shm.mount: Deactivated successfully. Jan 29 16:40:44.983253 kubelet[1861]: I0129 16:40:44.981976 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523" Jan 29 16:40:44.984404 containerd[1486]: time="2025-01-29T16:40:44.983756152Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:40:44.984404 containerd[1486]: time="2025-01-29T16:40:44.984148518Z" level=info msg="Ensure that sandbox 43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523 in task-service has been cleanup successfully" Jan 29 16:40:44.989591 containerd[1486]: time="2025-01-29T16:40:44.988871516Z" level=info msg="TearDown network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" successfully" Jan 29 16:40:44.989591 containerd[1486]: time="2025-01-29T16:40:44.988937259Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" returns successfully" Jan 29 16:40:44.999160 systemd[1]: run-netns-cni\x2d6ee08a8c\x2d087d\x2da015\x2d471c\x2df9c321b57491.mount: Deactivated successfully. 
Jan 29 16:40:45.003595 containerd[1486]: time="2025-01-29T16:40:45.002078496Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:40:45.003595 containerd[1486]: time="2025-01-29T16:40:45.002344224Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:40:45.003595 containerd[1486]: time="2025-01-29T16:40:45.002383568Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns successfully" Jan 29 16:40:45.008493 containerd[1486]: time="2025-01-29T16:40:45.008050556Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:40:45.008493 containerd[1486]: time="2025-01-29T16:40:45.008282972Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:40:45.008493 containerd[1486]: time="2025-01-29T16:40:45.008322737Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:40:45.011556 containerd[1486]: time="2025-01-29T16:40:45.010304654Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:45.011556 containerd[1486]: time="2025-01-29T16:40:45.010552178Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:45.011556 containerd[1486]: time="2025-01-29T16:40:45.010594648Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:45.014722 containerd[1486]: time="2025-01-29T16:40:45.012572677Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:45.014722 containerd[1486]: time="2025-01-29T16:40:45.012833797Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:45.014722 containerd[1486]: time="2025-01-29T16:40:45.012875525Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:45.014990 kubelet[1861]: I0129 16:40:45.013282 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6" Jan 29 16:40:45.015475 containerd[1486]: time="2025-01-29T16:40:45.015426520Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:40:45.016034 containerd[1486]: time="2025-01-29T16:40:45.015987502Z" level=info msg="Ensure that sandbox 08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6 in task-service has been cleanup successfully" Jan 29 16:40:45.020871 containerd[1486]: time="2025-01-29T16:40:45.020807612Z" level=info msg="TearDown network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" successfully" Jan 29 16:40:45.021113 containerd[1486]: time="2025-01-29T16:40:45.021069102Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" returns successfully" Jan 29 16:40:45.021457 containerd[1486]: time="2025-01-29T16:40:45.021411384Z" level=info msg="StopPodSandbox for 
\"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:45.021827 containerd[1486]: time="2025-01-29T16:40:45.021784554Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:45.022018 containerd[1486]: time="2025-01-29T16:40:45.021982185Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:45.027593 systemd[1]: run-netns-cni\x2d5674ca38\x2d1417\x2dafc7\x2d6280\x2dd17db81a956b.mount: Deactivated successfully. Jan 29 16:40:45.031218 containerd[1486]: time="2025-01-29T16:40:45.030214164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:6,}" Jan 29 16:40:45.047073 containerd[1486]: time="2025-01-29T16:40:45.046934343Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:40:45.048089 containerd[1486]: time="2025-01-29T16:40:45.047898561Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:40:45.049022 containerd[1486]: time="2025-01-29T16:40:45.048931348Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:40:45.050600 containerd[1486]: time="2025-01-29T16:40:45.050511962Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:40:45.057142 containerd[1486]: time="2025-01-29T16:40:45.056899572Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:40:45.057142 containerd[1486]: time="2025-01-29T16:40:45.056962030Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:40:45.070115 containerd[1486]: time="2025-01-29T16:40:45.068132390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:3,}" Jan 29 16:40:45.499294 kubelet[1861]: E0129 16:40:45.499186 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:45.744387 containerd[1486]: time="2025-01-29T16:40:45.744317414Z" level=error msg="Failed to destroy network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:45.744787 containerd[1486]: time="2025-01-29T16:40:45.744742532Z" level=error msg="encountered an error cleaning up failed sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:45.744837 containerd[1486]: time="2025-01-29T16:40:45.744805159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:6,} failed, 
error" error="failed to setup network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:45.745052 kubelet[1861]: E0129 16:40:45.745020 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:45.745345 kubelet[1861]: E0129 16:40:45.745249 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:45.745476 kubelet[1861]: E0129 16:40:45.745273 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:45.745476 kubelet[1861]: E0129 16:40:45.745443 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:45.748604 containerd[1486]: time="2025-01-29T16:40:45.748556986Z" level=error msg="Failed to destroy network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:45.748858 containerd[1486]: time="2025-01-29T16:40:45.748822554Z" level=error msg="encountered an error cleaning up failed sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:45.748897 containerd[1486]: time="2025-01-29T16:40:45.748873119Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:45.749141 kubelet[1861]: E0129 16:40:45.749020 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:45.749141 kubelet[1861]: E0129 16:40:45.749075 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:45.749141 kubelet[1861]: E0129 16:40:45.749117 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:45.749379 kubelet[1861]: E0129 16:40:45.749281 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kccdw" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" Jan 29 16:40:45.968892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe-shm.mount: Deactivated successfully. 
Jan 29 16:40:46.017963 kubelet[1861]: I0129 16:40:46.017710 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1" Jan 29 16:40:46.019135 containerd[1486]: time="2025-01-29T16:40:46.018509983Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" Jan 29 16:40:46.021558 containerd[1486]: time="2025-01-29T16:40:46.019216227Z" level=info msg="Ensure that sandbox 7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1 in task-service has been cleanup successfully" Jan 29 16:40:46.021558 containerd[1486]: time="2025-01-29T16:40:46.020856203Z" level=info msg="TearDown network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" successfully" Jan 29 16:40:46.021558 containerd[1486]: time="2025-01-29T16:40:46.020872724Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" returns successfully" Jan 29 16:40:46.021558 containerd[1486]: time="2025-01-29T16:40:46.021254660Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:40:46.021558 containerd[1486]: time="2025-01-29T16:40:46.021316546Z" level=info msg="TearDown network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" successfully" Jan 29 16:40:46.021558 containerd[1486]: time="2025-01-29T16:40:46.021327367Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" returns successfully" Jan 29 16:40:46.022155 containerd[1486]: time="2025-01-29T16:40:46.022104304Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:40:46.023326 containerd[1486]: time="2025-01-29T16:40:46.022200304Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:40:46.023326 containerd[1486]: time="2025-01-29T16:40:46.022217746Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:40:46.023326 containerd[1486]: time="2025-01-29T16:40:46.022800990Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:40:46.023326 containerd[1486]: time="2025-01-29T16:40:46.022859480Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:40:46.023326 containerd[1486]: time="2025-01-29T16:40:46.022869699Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:40:46.022556 systemd[1]: run-netns-cni\x2dee183d22\x2dccf7\x2d6884\x2dd6b2\x2d97620fe730d1.mount: Deactivated successfully. 
Jan 29 16:40:46.024253 containerd[1486]: time="2025-01-29T16:40:46.024045174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:4,}" Jan 29 16:40:46.025896 kubelet[1861]: I0129 16:40:46.025502 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe" Jan 29 16:40:46.026375 containerd[1486]: time="2025-01-29T16:40:46.026340128Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" Jan 29 16:40:46.027299 containerd[1486]: time="2025-01-29T16:40:46.027278798Z" level=info msg="Ensure that sandbox d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe in task-service has been cleanup successfully" Jan 29 16:40:46.029685 containerd[1486]: time="2025-01-29T16:40:46.029650907Z" level=info msg="TearDown network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" successfully" Jan 29 16:40:46.029685 containerd[1486]: time="2025-01-29T16:40:46.029679391Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" returns successfully" Jan 29 16:40:46.031611 containerd[1486]: time="2025-01-29T16:40:46.030437953Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:40:46.031611 containerd[1486]: time="2025-01-29T16:40:46.030547178Z" level=info msg="TearDown network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" successfully" Jan 29 16:40:46.031611 containerd[1486]: time="2025-01-29T16:40:46.030561806Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" returns successfully" Jan 29 16:40:46.031611 containerd[1486]: time="2025-01-29T16:40:46.030924786Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:40:46.031611 containerd[1486]: time="2025-01-29T16:40:46.031069047Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:40:46.031611 containerd[1486]: time="2025-01-29T16:40:46.031083674Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns successfully" Jan 29 16:40:46.031209 systemd[1]: run-netns-cni\x2d333a1060\x2ddbaf\x2d782a\x2d5a13\x2d7a252a2dfbb5.mount: Deactivated successfully. 
Jan 29 16:40:46.034384 containerd[1486]: time="2025-01-29T16:40:46.033953707Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:40:46.034384 containerd[1486]: time="2025-01-29T16:40:46.034024600Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:40:46.034384 containerd[1486]: time="2025-01-29T16:40:46.034038757Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.034398291Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.034465647Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.034477760Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.034952541Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.035197670Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.035214452Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.035541485Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.035636834Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.035649728Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:46.037012 containerd[1486]: time="2025-01-29T16:40:46.036803041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:7,}" Jan 29 16:40:46.181579 containerd[1486]: time="2025-01-29T16:40:46.180860561Z" level=error msg="Failed to destroy network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:46.181910 containerd[1486]: time="2025-01-29T16:40:46.181870084Z" level=error msg="encountered an error cleaning up failed sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:46.182041 containerd[1486]: time="2025-01-29T16:40:46.182017982Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:46.183369 kubelet[1861]: E0129 16:40:46.182327 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:46.183369 kubelet[1861]: E0129 16:40:46.182382 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:46.183369 kubelet[1861]: E0129 16:40:46.182408 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:46.183505 kubelet[1861]: E0129 16:40:46.182448 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kccdw" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" Jan 29 16:40:46.192014 containerd[1486]: time="2025-01-29T16:40:46.191976950Z" level=error msg="Failed to destroy network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:46.192558 containerd[1486]: time="2025-01-29T16:40:46.192520399Z" level=error msg="encountered an error cleaning up failed sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
29 16:40:46.192603 containerd[1486]: time="2025-01-29T16:40:46.192581464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:46.193054 kubelet[1861]: E0129 16:40:46.192773 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:46.193054 kubelet[1861]: E0129 16:40:46.192849 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:46.193054 kubelet[1861]: E0129 16:40:46.192871 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:46.193170 kubelet[1861]: E0129 16:40:46.192935 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:46.210111 update_engine[1465]: I20250129 16:40:46.209978 1465 update_attempter.cc:509] Updating boot flags... Jan 29 16:40:46.249193 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2733) Jan 29 16:40:46.359743 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2735) Jan 29 16:40:46.500229 kubelet[1861]: E0129 16:40:46.500168 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:46.967381 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c-shm.mount: Deactivated successfully. 
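Note: every RunPodSandbox failure above (and below) is the same underlying condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file the calico/node agent writes once it is running on this host, which is exactly the dependency the error text points at. Kubelet reacts by tearing the failed sandbox down and retrying with a higher Attempt counter. The Go sketch below is only an illustration of that single check, not Calico code; the path is taken verbatim from the error message.

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Same stat() the CNI "add" calls in the log are failing on. Until
    	// calico/node writes this file, every pod sandbox setup on the host
    	// fails with "no such file or directory".
    	const nodenameFile = "/var/lib/calico/nodename"

    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		fmt.Printf("calico/node has not written %s yet: %v\n", nodenameFile, err)
    		return
    	}
    	fmt.Printf("node identity recorded by calico/node: %q\n", string(data))
    }

Once calico/node has written the file, the same CNI setup calls should stop failing for this reason.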
Jan 29 16:40:46.967743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47-shm.mount: Deactivated successfully. Jan 29 16:40:47.033422 kubelet[1861]: I0129 16:40:47.032797 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47" Jan 29 16:40:47.033562 containerd[1486]: time="2025-01-29T16:40:47.033509754Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\"" Jan 29 16:40:47.033855 containerd[1486]: time="2025-01-29T16:40:47.033720329Z" level=info msg="Ensure that sandbox b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47 in task-service has been cleanup successfully" Jan 29 16:40:47.036846 containerd[1486]: time="2025-01-29T16:40:47.036784406Z" level=info msg="TearDown network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" successfully" Jan 29 16:40:47.036846 containerd[1486]: time="2025-01-29T16:40:47.036839920Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" returns successfully" Jan 29 16:40:47.037031 systemd[1]: run-netns-cni\x2d2af39abf\x2da798\x2d7461\x2d8b4f\x2da9900832f542.mount: Deactivated successfully. Jan 29 16:40:47.038780 containerd[1486]: time="2025-01-29T16:40:47.038754431Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" Jan 29 16:40:47.039063 containerd[1486]: time="2025-01-29T16:40:47.038938386Z" level=info msg="TearDown network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" successfully" Jan 29 16:40:47.039063 containerd[1486]: time="2025-01-29T16:40:47.038954987Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" returns successfully" Jan 29 16:40:47.039646 containerd[1486]: time="2025-01-29T16:40:47.039590389Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:40:47.039779 containerd[1486]: time="2025-01-29T16:40:47.039762421Z" level=info msg="TearDown network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" successfully" Jan 29 16:40:47.040106 containerd[1486]: time="2025-01-29T16:40:47.039843613Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" returns successfully" Jan 29 16:40:47.040332 containerd[1486]: time="2025-01-29T16:40:47.040294449Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:40:47.040379 containerd[1486]: time="2025-01-29T16:40:47.040369981Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:40:47.040408 containerd[1486]: time="2025-01-29T16:40:47.040382133Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:40:47.041333 containerd[1486]: time="2025-01-29T16:40:47.041199777Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:40:47.041333 containerd[1486]: time="2025-01-29T16:40:47.041282602Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 
16:40:47.041333 containerd[1486]: time="2025-01-29T16:40:47.041294404Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:40:47.042009 containerd[1486]: time="2025-01-29T16:40:47.041867139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:5,}" Jan 29 16:40:47.043714 kubelet[1861]: I0129 16:40:47.043313 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c" Jan 29 16:40:47.044825 containerd[1486]: time="2025-01-29T16:40:47.044670366Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\"" Jan 29 16:40:47.045180 containerd[1486]: time="2025-01-29T16:40:47.045110542Z" level=info msg="Ensure that sandbox 549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c in task-service has been cleanup successfully" Jan 29 16:40:47.047650 containerd[1486]: time="2025-01-29T16:40:47.047327670Z" level=info msg="TearDown network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" successfully" Jan 29 16:40:47.047650 containerd[1486]: time="2025-01-29T16:40:47.047349641Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" returns successfully" Jan 29 16:40:47.048093 systemd[1]: run-netns-cni\x2dcc75857d\x2dac14\x2d1ea2\x2d4cb7\x2de3f35bd3d507.mount: Deactivated successfully. Jan 29 16:40:47.049795 containerd[1486]: time="2025-01-29T16:40:47.049769189Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" Jan 29 16:40:47.051095 containerd[1486]: time="2025-01-29T16:40:47.051060010Z" level=info msg="TearDown network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" successfully" Jan 29 16:40:47.051095 containerd[1486]: time="2025-01-29T16:40:47.051082662Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" returns successfully" Jan 29 16:40:47.052153 containerd[1486]: time="2025-01-29T16:40:47.052131439Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:40:47.052399 containerd[1486]: time="2025-01-29T16:40:47.052320043Z" level=info msg="TearDown network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" successfully" Jan 29 16:40:47.052399 containerd[1486]: time="2025-01-29T16:40:47.052337826Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" returns successfully" Jan 29 16:40:47.053090 containerd[1486]: time="2025-01-29T16:40:47.052804602Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:40:47.053090 containerd[1486]: time="2025-01-29T16:40:47.052896274Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:40:47.053090 containerd[1486]: time="2025-01-29T16:40:47.052909338Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns successfully" Jan 29 16:40:47.053483 containerd[1486]: time="2025-01-29T16:40:47.053463858Z" level=info msg="StopPodSandbox for 
\"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:40:47.053642 containerd[1486]: time="2025-01-29T16:40:47.053600985Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:40:47.054569 containerd[1486]: time="2025-01-29T16:40:47.053684462Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:40:47.054569 containerd[1486]: time="2025-01-29T16:40:47.053904114Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:47.054569 containerd[1486]: time="2025-01-29T16:40:47.053971961Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:47.054569 containerd[1486]: time="2025-01-29T16:40:47.053983373Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:47.054569 containerd[1486]: time="2025-01-29T16:40:47.054440580Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:47.055206 containerd[1486]: time="2025-01-29T16:40:47.054880766Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:47.055206 containerd[1486]: time="2025-01-29T16:40:47.054897527Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:47.055462 containerd[1486]: time="2025-01-29T16:40:47.055340798Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:47.055462 containerd[1486]: time="2025-01-29T16:40:47.055407233Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:47.055462 containerd[1486]: time="2025-01-29T16:40:47.055419917Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:47.056134 containerd[1486]: time="2025-01-29T16:40:47.055902913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:8,}" Jan 29 16:40:47.173447 containerd[1486]: time="2025-01-29T16:40:47.173380552Z" level=error msg="Failed to destroy network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:47.173768 containerd[1486]: time="2025-01-29T16:40:47.173735297Z" level=error msg="encountered an error cleaning up failed sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:47.173838 containerd[1486]: time="2025-01-29T16:40:47.173811801Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:47.174430 kubelet[1861]: E0129 16:40:47.174032 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:47.174430 kubelet[1861]: E0129 16:40:47.174089 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:47.174430 kubelet[1861]: E0129 16:40:47.174115 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:47.174582 kubelet[1861]: E0129 16:40:47.174163 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kccdw" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" Jan 29 16:40:47.192363 containerd[1486]: time="2025-01-29T16:40:47.192324812Z" level=error msg="Failed to destroy network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:47.192971 containerd[1486]: time="2025-01-29T16:40:47.192789523Z" level=error msg="encountered an error cleaning up failed sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:47.192971 
containerd[1486]: time="2025-01-29T16:40:47.192845448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:47.193507 kubelet[1861]: E0129 16:40:47.193167 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:47.193507 kubelet[1861]: E0129 16:40:47.193220 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:47.193507 kubelet[1861]: E0129 16:40:47.193243 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:47.193655 kubelet[1861]: E0129 16:40:47.193285 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:47.500685 kubelet[1861]: E0129 16:40:47.500561 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:47.967515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea-shm.mount: Deactivated successfully. 
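Note: the "Deactivated successfully" mount units in this stretch are the per-sandbox leftovers of each failed attempt being cleaned up: run-containerd-io.containerd.grpc.v1.cri-sandboxes-<id>-shm.mount is the pause container's /dev/shm kept under /run/containerd/io.containerd.grpc.v1.cri/sandboxes/<id>/shm, and run-netns-cni\x2d....mount is the bind-mounted CNI network namespace under /run/netns. The \x2d sequences are systemd's escaping of '-' inside unit names. The small Go helper below, written purely for illustration (the systemd-escape tool can do the equivalent conversion), decodes one of the unit names from this log back into its path.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // unescapeMountUnit reverses systemd's unit-name escaping for the mount
    // units seen in the log: a plain '-' separates path components and "\xNN"
    // encodes a literal byte.
    func unescapeMountUnit(unit string) string {
    	name := strings.TrimSuffix(unit, ".mount")
    	var b strings.Builder
    	b.WriteByte('/')
    	for i := 0; i < len(name); i++ {
    		switch {
    		case name[i] == '-':
    			b.WriteByte('/')
    		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
    			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
    				b.WriteByte(byte(v))
    				i += 3
    				continue
    			}
    			b.WriteByte(name[i])
    		default:
    			b.WriteByte(name[i])
    		}
    	}
    	return b.String()
    }

    func main() {
    	// Prints /run/netns/cni-cc75857d-ac14-1ea2-4cb7-e3f35bd3d507
    	fmt.Println(unescapeMountUnit(`run-netns-cni\x2dcc75857d\x2dac14\x2d1ea2\x2d4cb7\x2de3f35bd3d507.mount`))
    }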
Jan 29 16:40:48.048340 kubelet[1861]: I0129 16:40:48.047761 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f" Jan 29 16:40:48.048458 containerd[1486]: time="2025-01-29T16:40:48.048265559Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\"" Jan 29 16:40:48.050425 containerd[1486]: time="2025-01-29T16:40:48.048477867Z" level=info msg="Ensure that sandbox b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f in task-service has been cleanup successfully" Jan 29 16:40:48.050311 systemd[1]: run-netns-cni\x2d300687a0\x2dbf1b\x2de2be\x2d70e3\x2dc42446f96c1c.mount: Deactivated successfully. Jan 29 16:40:48.051634 containerd[1486]: time="2025-01-29T16:40:48.050784884Z" level=info msg="TearDown network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" successfully" Jan 29 16:40:48.051634 containerd[1486]: time="2025-01-29T16:40:48.050806594Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" returns successfully" Jan 29 16:40:48.052360 containerd[1486]: time="2025-01-29T16:40:48.052335993Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\"" Jan 29 16:40:48.052524 containerd[1486]: time="2025-01-29T16:40:48.052475865Z" level=info msg="TearDown network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" successfully" Jan 29 16:40:48.052597 containerd[1486]: time="2025-01-29T16:40:48.052570332Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" returns successfully" Jan 29 16:40:48.053288 containerd[1486]: time="2025-01-29T16:40:48.053270515Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" Jan 29 16:40:48.053419 containerd[1486]: time="2025-01-29T16:40:48.053402703Z" level=info msg="TearDown network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" successfully" Jan 29 16:40:48.053498 containerd[1486]: time="2025-01-29T16:40:48.053485268Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" returns successfully" Jan 29 16:40:48.056024 containerd[1486]: time="2025-01-29T16:40:48.056004643Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:40:48.056319 containerd[1486]: time="2025-01-29T16:40:48.056151569Z" level=info msg="TearDown network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" successfully" Jan 29 16:40:48.056319 containerd[1486]: time="2025-01-29T16:40:48.056167118Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" returns successfully" Jan 29 16:40:48.056642 containerd[1486]: time="2025-01-29T16:40:48.056569533Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:40:48.056738 containerd[1486]: time="2025-01-29T16:40:48.056721398Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:40:48.056965 containerd[1486]: time="2025-01-29T16:40:48.056786530Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns 
successfully" Jan 29 16:40:48.057179 containerd[1486]: time="2025-01-29T16:40:48.057161503Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:40:48.057603 containerd[1486]: time="2025-01-29T16:40:48.057283532Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:40:48.057603 containerd[1486]: time="2025-01-29T16:40:48.057298239Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:40:48.058206 containerd[1486]: time="2025-01-29T16:40:48.057861586Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:48.058206 containerd[1486]: time="2025-01-29T16:40:48.057924764Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:48.058206 containerd[1486]: time="2025-01-29T16:40:48.057935114Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:48.058416 containerd[1486]: time="2025-01-29T16:40:48.058398283Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:48.058558 containerd[1486]: time="2025-01-29T16:40:48.058537824Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:48.059688 containerd[1486]: time="2025-01-29T16:40:48.058651618Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:48.059688 containerd[1486]: time="2025-01-29T16:40:48.059496683Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:48.059688 containerd[1486]: time="2025-01-29T16:40:48.059562456Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:48.059688 containerd[1486]: time="2025-01-29T16:40:48.059578215Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:48.059688 containerd[1486]: time="2025-01-29T16:40:48.059666481Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\"" Jan 29 16:40:48.059932 kubelet[1861]: I0129 16:40:48.058845 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea" Jan 29 16:40:48.060003 containerd[1486]: time="2025-01-29T16:40:48.059869642Z" level=info msg="Ensure that sandbox df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea in task-service has been cleanup successfully" Jan 29 16:40:48.061870 systemd[1]: run-netns-cni\x2de1077c9e\x2d57d8\x2d3b32\x2dfd7b\x2dec2c72bd3bf0.mount: Deactivated successfully. 
Jan 29 16:40:48.063266 containerd[1486]: time="2025-01-29T16:40:48.062994302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:9,}" Jan 29 16:40:48.063266 containerd[1486]: time="2025-01-29T16:40:48.063176123Z" level=info msg="TearDown network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" successfully" Jan 29 16:40:48.063266 containerd[1486]: time="2025-01-29T16:40:48.063190981Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" returns successfully" Jan 29 16:40:48.064374 containerd[1486]: time="2025-01-29T16:40:48.064241872Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\"" Jan 29 16:40:48.064374 containerd[1486]: time="2025-01-29T16:40:48.064318275Z" level=info msg="TearDown network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" successfully" Jan 29 16:40:48.064374 containerd[1486]: time="2025-01-29T16:40:48.064329747Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" returns successfully" Jan 29 16:40:48.065220 containerd[1486]: time="2025-01-29T16:40:48.065116573Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" Jan 29 16:40:48.065220 containerd[1486]: time="2025-01-29T16:40:48.065186384Z" level=info msg="TearDown network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" successfully" Jan 29 16:40:48.065220 containerd[1486]: time="2025-01-29T16:40:48.065197394Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" returns successfully" Jan 29 16:40:48.073612 containerd[1486]: time="2025-01-29T16:40:48.073502020Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:40:48.073612 containerd[1486]: time="2025-01-29T16:40:48.073588221Z" level=info msg="TearDown network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" successfully" Jan 29 16:40:48.073612 containerd[1486]: time="2025-01-29T16:40:48.073600705Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" returns successfully" Jan 29 16:40:48.074414 containerd[1486]: time="2025-01-29T16:40:48.074265872Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:40:48.074414 containerd[1486]: time="2025-01-29T16:40:48.074357344Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:40:48.074414 containerd[1486]: time="2025-01-29T16:40:48.074386078Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:40:48.076151 containerd[1486]: time="2025-01-29T16:40:48.075908002Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:40:48.076151 containerd[1486]: time="2025-01-29T16:40:48.075997660Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:40:48.076151 containerd[1486]: time="2025-01-29T16:40:48.076010063Z" level=info msg="StopPodSandbox for 
\"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:40:48.076909 containerd[1486]: time="2025-01-29T16:40:48.076888481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:6,}" Jan 29 16:40:48.231218 containerd[1486]: time="2025-01-29T16:40:48.230903336Z" level=error msg="Failed to destroy network for sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:48.232356 containerd[1486]: time="2025-01-29T16:40:48.231930061Z" level=error msg="encountered an error cleaning up failed sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:48.232839 containerd[1486]: time="2025-01-29T16:40:48.232508877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:48.233637 kubelet[1861]: E0129 16:40:48.233386 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:48.234040 kubelet[1861]: E0129 16:40:48.233870 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:48.234040 kubelet[1861]: E0129 16:40:48.233903 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:48.234040 kubelet[1861]: E0129 16:40:48.233974 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kccdw" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" Jan 29 16:40:48.245284 containerd[1486]: time="2025-01-29T16:40:48.245056270Z" level=error msg="Failed to destroy network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:48.245552 containerd[1486]: time="2025-01-29T16:40:48.245526461Z" level=error msg="encountered an error cleaning up failed sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:48.246492 containerd[1486]: time="2025-01-29T16:40:48.245679338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:48.246559 kubelet[1861]: E0129 16:40:48.246102 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:48.246559 kubelet[1861]: E0129 16:40:48.246253 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:48.246559 kubelet[1861]: E0129 16:40:48.246276 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:48.246690 kubelet[1861]: E0129 16:40:48.246366 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:48.501336 kubelet[1861]: E0129 16:40:48.501237 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:48.969613 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46-shm.mount: Deactivated successfully. Jan 29 16:40:48.969748 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5-shm.mount: Deactivated successfully. Jan 29 16:40:49.064001 kubelet[1861]: I0129 16:40:49.063613 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5" Jan 29 16:40:49.064495 containerd[1486]: time="2025-01-29T16:40:49.064215623Z" level=info msg="StopPodSandbox for \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\"" Jan 29 16:40:49.064495 containerd[1486]: time="2025-01-29T16:40:49.064394739Z" level=info msg="Ensure that sandbox aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5 in task-service has been cleanup successfully" Jan 29 16:40:49.064921 containerd[1486]: time="2025-01-29T16:40:49.064891480Z" level=info msg="TearDown network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" successfully" Jan 29 16:40:49.066738 containerd[1486]: time="2025-01-29T16:40:49.064981349Z" level=info msg="StopPodSandbox for \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" returns successfully" Jan 29 16:40:49.067262 containerd[1486]: time="2025-01-29T16:40:49.067075486Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\"" Jan 29 16:40:49.067262 containerd[1486]: time="2025-01-29T16:40:49.067140999Z" level=info msg="TearDown network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" successfully" Jan 29 16:40:49.067262 containerd[1486]: time="2025-01-29T16:40:49.067152381Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" returns successfully" Jan 29 16:40:49.068671 containerd[1486]: time="2025-01-29T16:40:49.068652384Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\"" Jan 29 16:40:49.068784 systemd[1]: run-netns-cni\x2daa680150\x2dbd4c\x2db6ed\x2d79b0\x2de873aa9dce6c.mount: Deactivated successfully. 
Jan 29 16:40:49.069329 containerd[1486]: time="2025-01-29T16:40:49.068777799Z" level=info msg="TearDown network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" successfully" Jan 29 16:40:49.069329 containerd[1486]: time="2025-01-29T16:40:49.068802004Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" returns successfully" Jan 29 16:40:49.071983 containerd[1486]: time="2025-01-29T16:40:49.071849861Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" Jan 29 16:40:49.071983 containerd[1486]: time="2025-01-29T16:40:49.071942565Z" level=info msg="TearDown network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" successfully" Jan 29 16:40:49.071983 containerd[1486]: time="2025-01-29T16:40:49.071958805Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" returns successfully" Jan 29 16:40:49.072652 containerd[1486]: time="2025-01-29T16:40:49.072593285Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:40:49.072985 containerd[1486]: time="2025-01-29T16:40:49.072970242Z" level=info msg="TearDown network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" successfully" Jan 29 16:40:49.073081 containerd[1486]: time="2025-01-29T16:40:49.073066783Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" returns successfully" Jan 29 16:40:49.073545 kubelet[1861]: I0129 16:40:49.073521 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46" Jan 29 16:40:49.073756 containerd[1486]: time="2025-01-29T16:40:49.073738643Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:40:49.074235 containerd[1486]: time="2025-01-29T16:40:49.074212852Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:40:49.074235 containerd[1486]: time="2025-01-29T16:40:49.074232449Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns successfully" Jan 29 16:40:49.075078 containerd[1486]: time="2025-01-29T16:40:49.075040825Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:40:49.075134 containerd[1486]: time="2025-01-29T16:40:49.075118231Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:40:49.075165 containerd[1486]: time="2025-01-29T16:40:49.075131165Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:40:49.075194 containerd[1486]: time="2025-01-29T16:40:49.075183493Z" level=info msg="StopPodSandbox for \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\"" Jan 29 16:40:49.075396 containerd[1486]: time="2025-01-29T16:40:49.075339996Z" level=info msg="Ensure that sandbox 76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46 in task-service has been cleanup successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.077854102Z" level=info msg="TearDown network for sandbox 
\"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.077874320Z" level=info msg="StopPodSandbox for \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" returns successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.077984186Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078048586Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078059106Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078149957Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\"" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078206783Z" level=info msg="TearDown network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078220048Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" returns successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078296642Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078350162Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078360000Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078435552Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\"" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078488652Z" level=info msg="TearDown network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078499542Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" returns successfully" Jan 29 16:40:49.078666 containerd[1486]: time="2025-01-29T16:40:49.078591124Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:49.079530 containerd[1486]: time="2025-01-29T16:40:49.079511781Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" Jan 29 16:40:49.079675 containerd[1486]: time="2025-01-29T16:40:49.079658325Z" level=info msg="TearDown network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" successfully" Jan 29 16:40:49.079743 containerd[1486]: time="2025-01-29T16:40:49.079729299Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" returns successfully" Jan 29 16:40:49.080096 containerd[1486]: time="2025-01-29T16:40:49.080038749Z" level=info 
msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:49.080096 containerd[1486]: time="2025-01-29T16:40:49.080053587Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:49.080096 containerd[1486]: time="2025-01-29T16:40:49.080487140Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:40:49.080096 containerd[1486]: time="2025-01-29T16:40:49.080545008Z" level=info msg="TearDown network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" successfully" Jan 29 16:40:49.080096 containerd[1486]: time="2025-01-29T16:40:49.080555007Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" returns successfully" Jan 29 16:40:49.080527 systemd[1]: run-netns-cni\x2d6dc414e1\x2dd02a\x2deaaf\x2d706b\x2d7e2c5db25aa3.mount: Deactivated successfully. Jan 29 16:40:49.081313 containerd[1486]: time="2025-01-29T16:40:49.080969104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:10,}" Jan 29 16:40:49.081443 containerd[1486]: time="2025-01-29T16:40:49.081425640Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:40:49.081583 containerd[1486]: time="2025-01-29T16:40:49.081566805Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:40:49.081761 containerd[1486]: time="2025-01-29T16:40:49.081715945Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:40:49.082265 containerd[1486]: time="2025-01-29T16:40:49.082247682Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:40:49.082474 containerd[1486]: time="2025-01-29T16:40:49.082457305Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:40:49.082562 containerd[1486]: time="2025-01-29T16:40:49.082546472Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:40:49.083544 containerd[1486]: time="2025-01-29T16:40:49.083360088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:7,}" Jan 29 16:40:49.220392 containerd[1486]: time="2025-01-29T16:40:49.220262759Z" level=error msg="Failed to destroy network for sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:49.220648 containerd[1486]: time="2025-01-29T16:40:49.220597928Z" level=error msg="encountered an error cleaning up failed sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 16:40:49.220693 containerd[1486]: time="2025-01-29T16:40:49.220674601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:49.220945 kubelet[1861]: E0129 16:40:49.220888 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:49.221380 kubelet[1861]: E0129 16:40:49.220947 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:49.221380 kubelet[1861]: E0129 16:40:49.220971 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:49.221380 kubelet[1861]: E0129 16:40:49.221017 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kccdw" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" Jan 29 16:40:49.240357 containerd[1486]: time="2025-01-29T16:40:49.240233695Z" level=error msg="Failed to destroy network for sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:49.240706 containerd[1486]: time="2025-01-29T16:40:49.240679120Z" level=error msg="encountered an error cleaning up failed sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:49.240891 containerd[1486]: time="2025-01-29T16:40:49.240806118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:49.241230 kubelet[1861]: E0129 16:40:49.241061 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:49.241230 kubelet[1861]: E0129 16:40:49.241120 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:49.241230 kubelet[1861]: E0129 16:40:49.241141 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:49.241345 kubelet[1861]: E0129 16:40:49.241181 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:49.504732 kubelet[1861]: E0129 16:40:49.503837 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:49.924092 containerd[1486]: time="2025-01-29T16:40:49.923995810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:49.925963 containerd[1486]: time="2025-01-29T16:40:49.925662807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 16:40:49.927693 containerd[1486]: time="2025-01-29T16:40:49.927650645Z" 
level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:49.934117 containerd[1486]: time="2025-01-29T16:40:49.934048694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:49.936112 containerd[1486]: time="2025-01-29T16:40:49.934531881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.011808024s" Jan 29 16:40:49.936112 containerd[1486]: time="2025-01-29T16:40:49.934563099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 16:40:49.947962 containerd[1486]: time="2025-01-29T16:40:49.947912937Z" level=info msg="CreateContainer within sandbox \"32db68fb19ec8770f3a278450675a5e0904b51836a61db6272d0b7eb81b987be\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 16:40:49.977235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38-shm.mount: Deactivated successfully. Jan 29 16:40:49.978322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c-shm.mount: Deactivated successfully. Jan 29 16:40:49.978750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349254359.mount: Deactivated successfully. Jan 29 16:40:49.984730 containerd[1486]: time="2025-01-29T16:40:49.982884095Z" level=info msg="CreateContainer within sandbox \"32db68fb19ec8770f3a278450675a5e0904b51836a61db6272d0b7eb81b987be\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b06b220ed574e6d8b8f0af0a9e9c1d31e8e08e5eb908adabd9a106bb7fa86e3f\"" Jan 29 16:40:49.985162 containerd[1486]: time="2025-01-29T16:40:49.985113917Z" level=info msg="StartContainer for \"b06b220ed574e6d8b8f0af0a9e9c1d31e8e08e5eb908adabd9a106bb7fa86e3f\"" Jan 29 16:40:50.044770 systemd[1]: Started cri-containerd-b06b220ed574e6d8b8f0af0a9e9c1d31e8e08e5eb908adabd9a106bb7fa86e3f.scope - libcontainer container b06b220ed574e6d8b8f0af0a9e9c1d31e8e08e5eb908adabd9a106bb7fa86e3f. 
Jan 29 16:40:50.083131 containerd[1486]: time="2025-01-29T16:40:50.082019884Z" level=info msg="StartContainer for \"b06b220ed574e6d8b8f0af0a9e9c1d31e8e08e5eb908adabd9a106bb7fa86e3f\" returns successfully" Jan 29 16:40:50.088759 kubelet[1861]: I0129 16:40:50.088727 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c" Jan 29 16:40:50.091649 containerd[1486]: time="2025-01-29T16:40:50.089410455Z" level=info msg="StopPodSandbox for \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\"" Jan 29 16:40:50.091649 containerd[1486]: time="2025-01-29T16:40:50.089608757Z" level=info msg="Ensure that sandbox 20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c in task-service has been cleanup successfully" Jan 29 16:40:50.091482 systemd[1]: run-netns-cni\x2d91b14229\x2d3086\x2dc393\x2d1f62\x2d839d23247b66.mount: Deactivated successfully. Jan 29 16:40:50.092236 containerd[1486]: time="2025-01-29T16:40:50.092175231Z" level=info msg="TearDown network for sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\" successfully" Jan 29 16:40:50.092236 containerd[1486]: time="2025-01-29T16:40:50.092215286Z" level=info msg="StopPodSandbox for \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\" returns successfully" Jan 29 16:40:50.093534 containerd[1486]: time="2025-01-29T16:40:50.092723509Z" level=info msg="StopPodSandbox for \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\"" Jan 29 16:40:50.093880 containerd[1486]: time="2025-01-29T16:40:50.093608369Z" level=info msg="TearDown network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" successfully" Jan 29 16:40:50.093924 containerd[1486]: time="2025-01-29T16:40:50.093876722Z" level=info msg="StopPodSandbox for \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" returns successfully" Jan 29 16:40:50.094099 containerd[1486]: time="2025-01-29T16:40:50.094074823Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\"" Jan 29 16:40:50.094230 containerd[1486]: time="2025-01-29T16:40:50.094214555Z" level=info msg="TearDown network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" successfully" Jan 29 16:40:50.094384 containerd[1486]: time="2025-01-29T16:40:50.094287442Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" returns successfully" Jan 29 16:40:50.095161 containerd[1486]: time="2025-01-29T16:40:50.095141003Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\"" Jan 29 16:40:50.095329 containerd[1486]: time="2025-01-29T16:40:50.095314218Z" level=info msg="TearDown network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" successfully" Jan 29 16:40:50.095410 containerd[1486]: time="2025-01-29T16:40:50.095395841Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" returns successfully" Jan 29 16:40:50.095881 containerd[1486]: time="2025-01-29T16:40:50.095855683Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" Jan 29 16:40:50.096004 containerd[1486]: time="2025-01-29T16:40:50.095923661Z" level=info msg="TearDown network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" 
successfully" Jan 29 16:40:50.096004 containerd[1486]: time="2025-01-29T16:40:50.095939400Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" returns successfully" Jan 29 16:40:50.096425 containerd[1486]: time="2025-01-29T16:40:50.096396758Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:40:50.096685 kubelet[1861]: I0129 16:40:50.096583 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38" Jan 29 16:40:50.097757 containerd[1486]: time="2025-01-29T16:40:50.097365004Z" level=info msg="TearDown network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" successfully" Jan 29 16:40:50.097757 containerd[1486]: time="2025-01-29T16:40:50.097379571Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" returns successfully" Jan 29 16:40:50.097757 containerd[1486]: time="2025-01-29T16:40:50.097440646Z" level=info msg="StopPodSandbox for \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\"" Jan 29 16:40:50.097875 containerd[1486]: time="2025-01-29T16:40:50.097851286Z" level=info msg="Ensure that sandbox 04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38 in task-service has been cleanup successfully" Jan 29 16:40:50.098152 containerd[1486]: time="2025-01-29T16:40:50.098126041Z" level=info msg="TearDown network for sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\" successfully" Jan 29 16:40:50.098152 containerd[1486]: time="2025-01-29T16:40:50.098145999Z" level=info msg="StopPodSandbox for \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\" returns successfully" Jan 29 16:40:50.098930 containerd[1486]: time="2025-01-29T16:40:50.098739973Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:40:50.098930 containerd[1486]: time="2025-01-29T16:40:50.098787973Z" level=info msg="StopPodSandbox for \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\"" Jan 29 16:40:50.098930 containerd[1486]: time="2025-01-29T16:40:50.098811036Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:40:50.098930 containerd[1486]: time="2025-01-29T16:40:50.098822608Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns successfully" Jan 29 16:40:50.098930 containerd[1486]: time="2025-01-29T16:40:50.098862092Z" level=info msg="TearDown network for sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" successfully" Jan 29 16:40:50.098930 containerd[1486]: time="2025-01-29T16:40:50.098873834Z" level=info msg="StopPodSandbox for \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" returns successfully" Jan 29 16:40:50.100335 systemd[1]: run-netns-cni\x2d6aee504a\x2d2313\x2d4cea\x2d5560\x2defd0384fbb02.mount: Deactivated successfully. 
Jan 29 16:40:50.101068 containerd[1486]: time="2025-01-29T16:40:50.101038914Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\"" Jan 29 16:40:50.101124 containerd[1486]: time="2025-01-29T16:40:50.101113845Z" level=info msg="TearDown network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" successfully" Jan 29 16:40:50.101150 containerd[1486]: time="2025-01-29T16:40:50.101125667Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" returns successfully" Jan 29 16:40:50.101400 containerd[1486]: time="2025-01-29T16:40:50.101331904Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:40:50.101592 containerd[1486]: time="2025-01-29T16:40:50.101568919Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:40:50.101592 containerd[1486]: time="2025-01-29T16:40:50.101587874Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:40:50.102555 containerd[1486]: time="2025-01-29T16:40:50.101760668Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\"" Jan 29 16:40:50.102555 containerd[1486]: time="2025-01-29T16:40:50.101827003Z" level=info msg="TearDown network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" successfully" Jan 29 16:40:50.102555 containerd[1486]: time="2025-01-29T16:40:50.101838975Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" returns successfully" Jan 29 16:40:50.103254 containerd[1486]: time="2025-01-29T16:40:50.102805428Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:50.103332 containerd[1486]: time="2025-01-29T16:40:50.103308321Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" Jan 29 16:40:50.103544 containerd[1486]: time="2025-01-29T16:40:50.103411003Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:50.103544 containerd[1486]: time="2025-01-29T16:40:50.103538072Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:50.103723 containerd[1486]: time="2025-01-29T16:40:50.103485022Z" level=info msg="TearDown network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" successfully" Jan 29 16:40:50.103723 containerd[1486]: time="2025-01-29T16:40:50.103718520Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" returns successfully" Jan 29 16:40:50.105333 containerd[1486]: time="2025-01-29T16:40:50.104305160Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:50.105333 containerd[1486]: time="2025-01-29T16:40:50.104520795Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:40:50.105333 containerd[1486]: time="2025-01-29T16:40:50.104698889Z" level=info msg="TearDown network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" successfully" 
Jan 29 16:40:50.105333 containerd[1486]: time="2025-01-29T16:40:50.104712444Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" returns successfully" Jan 29 16:40:50.105333 containerd[1486]: time="2025-01-29T16:40:50.104751608Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:50.105333 containerd[1486]: time="2025-01-29T16:40:50.104864319Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:50.105554 containerd[1486]: time="2025-01-29T16:40:50.105514789Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:50.105627 containerd[1486]: time="2025-01-29T16:40:50.105595450Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:50.105627 containerd[1486]: time="2025-01-29T16:40:50.105606451Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:50.105686 containerd[1486]: time="2025-01-29T16:40:50.105667966Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:40:50.105998 containerd[1486]: time="2025-01-29T16:40:50.105721577Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:40:50.105998 containerd[1486]: time="2025-01-29T16:40:50.105736936Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:40:50.106686 containerd[1486]: time="2025-01-29T16:40:50.106657252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:11,}" Jan 29 16:40:50.107101 containerd[1486]: time="2025-01-29T16:40:50.107072350Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:40:50.107165 containerd[1486]: time="2025-01-29T16:40:50.107143544Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:40:50.107165 containerd[1486]: time="2025-01-29T16:40:50.107160395Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:40:50.107787 containerd[1486]: time="2025-01-29T16:40:50.107733009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:8,}" Jan 29 16:40:50.179152 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 16:40:50.179327 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 16:40:50.225164 containerd[1486]: time="2025-01-29T16:40:50.225112024Z" level=error msg="Failed to destroy network for sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:50.226165 containerd[1486]: time="2025-01-29T16:40:50.225511693Z" level=error msg="encountered an error cleaning up failed sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:50.226165 containerd[1486]: time="2025-01-29T16:40:50.225584079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:11,} failed, error" error="failed to setup network for sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:50.226304 kubelet[1861]: E0129 16:40:50.225810 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:50.226304 kubelet[1861]: E0129 16:40:50.225875 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:50.226304 kubelet[1861]: E0129 16:40:50.225900 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9lss" Jan 29 16:40:50.226410 kubelet[1861]: E0129 16:40:50.225942 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9lss_calico-system(4cbd60f8-abf3-4e44-b98f-73df647c2adc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-n9lss" podUID="4cbd60f8-abf3-4e44-b98f-73df647c2adc" Jan 29 16:40:50.237726 containerd[1486]: time="2025-01-29T16:40:50.237580669Z" level=error msg="Failed to destroy network for sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:50.238167 containerd[1486]: time="2025-01-29T16:40:50.238022387Z" level=error msg="encountered an error cleaning up failed sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:50.238167 containerd[1486]: time="2025-01-29T16:40:50.238080276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:8,} failed, error" error="failed to setup network for sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:50.238743 kubelet[1861]: E0129 16:40:50.238386 1861 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:40:50.238743 kubelet[1861]: E0129 16:40:50.238438 1861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:50.238743 kubelet[1861]: E0129 16:40:50.238458 1861 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kccdw" Jan 29 16:40:50.238861 kubelet[1861]: E0129 16:40:50.238496 1861 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kccdw_default(7c5c4917-93fb-4240-8cff-3be51f6e8145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kccdw" podUID="7c5c4917-93fb-4240-8cff-3be51f6e8145" Jan 29 16:40:50.505251 kubelet[1861]: E0129 16:40:50.504997 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:51.133091 kubelet[1861]: I0129 16:40:51.133024 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6" Jan 29 16:40:51.135590 containerd[1486]: time="2025-01-29T16:40:51.135341098Z" level=info msg="StopPodSandbox for \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\"" Jan 29 16:40:51.137028 containerd[1486]: time="2025-01-29T16:40:51.136468893Z" level=info msg="Ensure that sandbox 9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6 in task-service has been cleanup successfully" Jan 29 16:40:51.141298 containerd[1486]: time="2025-01-29T16:40:51.139040767Z" level=info msg="TearDown network for sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\" successfully" Jan 29 16:40:51.141298 containerd[1486]: time="2025-01-29T16:40:51.139091452Z" level=info msg="StopPodSandbox for \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\" returns successfully" Jan 29 16:40:51.143005 containerd[1486]: time="2025-01-29T16:40:51.141817735Z" level=info msg="StopPodSandbox for \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\"" Jan 29 16:40:51.143005 containerd[1486]: time="2025-01-29T16:40:51.141976413Z" level=info msg="TearDown network for sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\" successfully" Jan 29 16:40:51.143005 containerd[1486]: time="2025-01-29T16:40:51.142004746Z" level=info msg="StopPodSandbox for \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\" returns successfully" Jan 29 16:40:51.148199 systemd[1]: run-netns-cni\x2dda2cefd4\x2d1e66\x2d6c9f\x2d8e6e\x2d486c5c9c3e3b.mount: Deactivated successfully. 
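
Every RunPodSandbox failure in this stretch is the same condition: the Calico CNI plugin stats /var/lib/calico/nodename on the host, and that file is only written by calico-node after it has started and registered the node, so any CNI add/delete issued before then fails with the advice quoted in the error. Once the calico-node container is started above (16:40:50), the retries at 16:40:51 (attempt 12 for csi-node-driver-n9lss, attempt 9 for the nginx pod) go through. A minimal sketch of that kind of existence check, with an illustrative helper name (only the file path is taken from the log; this is not the plugin's actual code):

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the marker that calico-node writes once it has registered
// this host; CNI operations cannot proceed until it exists.
const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeReady reports whether the marker file exists yet.
func calicoNodeReady() (bool, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			return false, nil // calico-node has not written the file yet
		}
		return false, err // some other filesystem problem
	}
	return true, nil
}

func main() {
	ready, err := calicoNodeReady()
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	if !ready {
		fmt.Println("check that the calico/node container is running and has mounted /var/lib/calico/")
		return
	}
	fmt.Println("calico-node has registered this host; CNI add can proceed")
}
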
Jan 29 16:40:51.153810 kubelet[1861]: I0129 16:40:51.153353 1861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-plhx4" podStartSLOduration=5.101875028 podStartE2EDuration="28.153283269s" podCreationTimestamp="2025-01-29 16:40:23 +0000 UTC" firstStartedPulling="2025-01-29 16:40:26.233791324 +0000 UTC m=+4.435250261" lastFinishedPulling="2025-01-29 16:40:49.936587215 +0000 UTC m=+27.486658502" observedRunningTime="2025-01-29 16:40:51.150046328 +0000 UTC m=+28.700117704" watchObservedRunningTime="2025-01-29 16:40:51.153283269 +0000 UTC m=+28.703354555" Jan 29 16:40:51.155415 containerd[1486]: time="2025-01-29T16:40:51.155153025Z" level=info msg="StopPodSandbox for \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\"" Jan 29 16:40:51.155415 containerd[1486]: time="2025-01-29T16:40:51.155372066Z" level=info msg="TearDown network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" successfully" Jan 29 16:40:51.155415 containerd[1486]: time="2025-01-29T16:40:51.155401762Z" level=info msg="StopPodSandbox for \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" returns successfully" Jan 29 16:40:51.156699 containerd[1486]: time="2025-01-29T16:40:51.156490303Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\"" Jan 29 16:40:51.158381 containerd[1486]: time="2025-01-29T16:40:51.158200461Z" level=info msg="TearDown network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" successfully" Jan 29 16:40:51.158381 containerd[1486]: time="2025-01-29T16:40:51.158296391Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" returns successfully" Jan 29 16:40:51.159968 containerd[1486]: time="2025-01-29T16:40:51.159677641Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\"" Jan 29 16:40:51.159968 containerd[1486]: time="2025-01-29T16:40:51.159840957Z" level=info msg="TearDown network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" successfully" Jan 29 16:40:51.159968 containerd[1486]: time="2025-01-29T16:40:51.159868319Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" returns successfully" Jan 29 16:40:51.161209 containerd[1486]: time="2025-01-29T16:40:51.161129133Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" Jan 29 16:40:51.161750 containerd[1486]: time="2025-01-29T16:40:51.161601099Z" level=info msg="TearDown network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" successfully" Jan 29 16:40:51.161987 containerd[1486]: time="2025-01-29T16:40:51.161708440Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" returns successfully" Jan 29 16:40:51.162749 kubelet[1861]: I0129 16:40:51.162704 1861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb" Jan 29 16:40:51.164175 containerd[1486]: time="2025-01-29T16:40:51.164129771Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:40:51.169679 containerd[1486]: time="2025-01-29T16:40:51.164798616Z" level=info msg="TearDown network for sandbox 
\"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" successfully" Jan 29 16:40:51.169679 containerd[1486]: time="2025-01-29T16:40:51.164891480Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" returns successfully" Jan 29 16:40:51.169679 containerd[1486]: time="2025-01-29T16:40:51.164458417Z" level=info msg="StopPodSandbox for \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\"" Jan 29 16:40:51.169679 containerd[1486]: time="2025-01-29T16:40:51.165336414Z" level=info msg="Ensure that sandbox 428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb in task-service has been cleanup successfully" Jan 29 16:40:51.170355 containerd[1486]: time="2025-01-29T16:40:51.170308219Z" level=info msg="TearDown network for sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\" successfully" Jan 29 16:40:51.170642 containerd[1486]: time="2025-01-29T16:40:51.170539523Z" level=info msg="StopPodSandbox for \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\" returns successfully" Jan 29 16:40:51.174885 containerd[1486]: time="2025-01-29T16:40:51.172687702Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:40:51.175199 containerd[1486]: time="2025-01-29T16:40:51.175157634Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:40:51.175381 containerd[1486]: time="2025-01-29T16:40:51.175338944Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns successfully" Jan 29 16:40:51.178175 systemd[1]: run-netns-cni\x2d29e83020\x2de608\x2d7f03\x2d4c9b\x2d03e6647c1873.mount: Deactivated successfully. 
Jan 29 16:40:51.183685 containerd[1486]: time="2025-01-29T16:40:51.180171698Z" level=info msg="StopPodSandbox for \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\"" Jan 29 16:40:51.183685 containerd[1486]: time="2025-01-29T16:40:51.180371963Z" level=info msg="TearDown network for sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\" successfully" Jan 29 16:40:51.183685 containerd[1486]: time="2025-01-29T16:40:51.180401308Z" level=info msg="StopPodSandbox for \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\" returns successfully" Jan 29 16:40:51.183685 containerd[1486]: time="2025-01-29T16:40:51.181462669Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:40:51.183685 containerd[1486]: time="2025-01-29T16:40:51.181662955Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:40:51.183685 containerd[1486]: time="2025-01-29T16:40:51.181695135Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:40:51.183685 containerd[1486]: time="2025-01-29T16:40:51.181795653Z" level=info msg="StopPodSandbox for \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\"" Jan 29 16:40:51.183685 containerd[1486]: time="2025-01-29T16:40:51.181928362Z" level=info msg="TearDown network for sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" successfully" Jan 29 16:40:51.183685 containerd[1486]: time="2025-01-29T16:40:51.181956335Z" level=info msg="StopPodSandbox for \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" returns successfully" Jan 29 16:40:51.184294 containerd[1486]: time="2025-01-29T16:40:51.183913776Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:40:51.184294 containerd[1486]: time="2025-01-29T16:40:51.184072804Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:40:51.184294 containerd[1486]: time="2025-01-29T16:40:51.184100136Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:40:51.187756 containerd[1486]: time="2025-01-29T16:40:51.186833803Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\"" Jan 29 16:40:51.187756 containerd[1486]: time="2025-01-29T16:40:51.186994815Z" level=info msg="TearDown network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" successfully" Jan 29 16:40:51.187756 containerd[1486]: time="2025-01-29T16:40:51.187020352Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" returns successfully" Jan 29 16:40:51.191019 containerd[1486]: time="2025-01-29T16:40:51.190959019Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\"" Jan 29 16:40:51.191316 containerd[1486]: time="2025-01-29T16:40:51.191181707Z" level=info msg="TearDown network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" successfully" Jan 29 16:40:51.191316 containerd[1486]: time="2025-01-29T16:40:51.191219939Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" 
returns successfully" Jan 29 16:40:51.191446 containerd[1486]: time="2025-01-29T16:40:51.191313184Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:40:51.191446 containerd[1486]: time="2025-01-29T16:40:51.191407200Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:40:51.191446 containerd[1486]: time="2025-01-29T16:40:51.191426536Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.192011303Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.192131488Z" level=info msg="TearDown network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" successfully" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.192150594Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" returns successfully" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.192223641Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.192316555Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.192334649Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.193181117Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.193358409Z" level=info msg="TearDown network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" successfully" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.193380811Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" returns successfully" Jan 29 16:40:51.195527 containerd[1486]: time="2025-01-29T16:40:51.193590024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:12,}" Jan 29 16:40:51.196660 containerd[1486]: time="2025-01-29T16:40:51.196506934Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:40:51.196660 containerd[1486]: time="2025-01-29T16:40:51.196583278Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:40:51.196660 containerd[1486]: time="2025-01-29T16:40:51.196595030Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:40:51.197943 containerd[1486]: time="2025-01-29T16:40:51.197688521Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:40:51.197943 containerd[1486]: time="2025-01-29T16:40:51.197788368Z" level=info msg="TearDown network for sandbox 
\"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:40:51.197943 containerd[1486]: time="2025-01-29T16:40:51.197802053Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:40:51.199032 containerd[1486]: time="2025-01-29T16:40:51.199012393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:9,}" Jan 29 16:40:51.442093 systemd-networkd[1389]: calif36a43ab040: Link UP Jan 29 16:40:51.442602 systemd-networkd[1389]: calif36a43ab040: Gained carrier Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.268 [INFO][3070] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.293 [INFO][3070] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0 nginx-deployment-85f456d6dd- default 7c5c4917-93fb-4240-8cff-3be51f6e8145 1162 0 2025-01-29 16:40:42 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.158 nginx-deployment-85f456d6dd-kccdw eth0 default [] [] [kns.default ksa.default.default] calif36a43ab040 [] []}} ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Namespace="default" Pod="nginx-deployment-85f456d6dd-kccdw" WorkloadEndpoint="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.293 [INFO][3070] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Namespace="default" Pod="nginx-deployment-85f456d6dd-kccdw" WorkloadEndpoint="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.332 [INFO][3092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" HandleID="k8s-pod-network.147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Workload="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.355 [INFO][3092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" HandleID="k8s-pod-network.147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Workload="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907f0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.158", "pod":"nginx-deployment-85f456d6dd-kccdw", "timestamp":"2025-01-29 16:40:51.332194126 +0000 UTC"}, Hostname:"172.24.4.158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.355 [INFO][3092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.355 [INFO][3092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.355 [INFO][3092] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.158' Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.359 [INFO][3092] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" host="172.24.4.158" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.366 [INFO][3092] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.158" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.375 [INFO][3092] ipam/ipam.go 489: Trying affinity for 192.168.34.192/26 host="172.24.4.158" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.378 [INFO][3092] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.192/26 host="172.24.4.158" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.387 [INFO][3092] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="172.24.4.158" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.387 [INFO][3092] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" host="172.24.4.158" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.393 [INFO][3092] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47 Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.401 [INFO][3092] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" host="172.24.4.158" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.418 [INFO][3092] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.193/26] block=192.168.34.192/26 handle="k8s-pod-network.147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" host="172.24.4.158" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.418 [INFO][3092] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.193/26] handle="k8s-pod-network.147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" host="172.24.4.158" Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.418 [INFO][3092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 16:40:51.470904 containerd[1486]: 2025-01-29 16:40:51.418 [INFO][3092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.193/26] IPv6=[] ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" HandleID="k8s-pod-network.147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Workload="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" Jan 29 16:40:51.473776 containerd[1486]: 2025-01-29 16:40:51.423 [INFO][3070] cni-plugin/k8s.go 386: Populated endpoint ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Namespace="default" Pod="nginx-deployment-85f456d6dd-kccdw" WorkloadEndpoint="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"7c5c4917-93fb-4240-8cff-3be51f6e8145", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.158", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-kccdw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif36a43ab040", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:40:51.473776 containerd[1486]: 2025-01-29 16:40:51.424 [INFO][3070] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.193/32] ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Namespace="default" Pod="nginx-deployment-85f456d6dd-kccdw" WorkloadEndpoint="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" Jan 29 16:40:51.473776 containerd[1486]: 2025-01-29 16:40:51.424 [INFO][3070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif36a43ab040 ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Namespace="default" Pod="nginx-deployment-85f456d6dd-kccdw" WorkloadEndpoint="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" Jan 29 16:40:51.473776 containerd[1486]: 2025-01-29 16:40:51.445 [INFO][3070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Namespace="default" Pod="nginx-deployment-85f456d6dd-kccdw" WorkloadEndpoint="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" Jan 29 16:40:51.473776 containerd[1486]: 2025-01-29 16:40:51.447 [INFO][3070] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Namespace="default" Pod="nginx-deployment-85f456d6dd-kccdw" WorkloadEndpoint="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"7c5c4917-93fb-4240-8cff-3be51f6e8145", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.158", ContainerID:"147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47", Pod:"nginx-deployment-85f456d6dd-kccdw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif36a43ab040", MAC:"5a:d0:60:f0:ac:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:40:51.473776 containerd[1486]: 2025-01-29 16:40:51.468 [INFO][3070] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47" Namespace="default" Pod="nginx-deployment-85f456d6dd-kccdw" WorkloadEndpoint="172.24.4.158-k8s-nginx--deployment--85f456d6dd--kccdw-eth0" Jan 29 16:40:51.505529 kubelet[1861]: E0129 16:40:51.505390 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:51.511299 containerd[1486]: time="2025-01-29T16:40:51.510247314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:40:51.511299 containerd[1486]: time="2025-01-29T16:40:51.510388870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:40:51.511299 containerd[1486]: time="2025-01-29T16:40:51.510592472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:40:51.512464 containerd[1486]: time="2025-01-29T16:40:51.512290256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:40:51.537812 systemd[1]: Started cri-containerd-147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47.scope - libcontainer container 147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47. 
Jan 29 16:40:51.556158 systemd-networkd[1389]: cali0ee4708b236: Link UP Jan 29 16:40:51.557309 systemd-networkd[1389]: cali0ee4708b236: Gained carrier Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.276 [INFO][3065] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.299 [INFO][3065] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.158-k8s-csi--node--driver--n9lss-eth0 csi-node-driver- calico-system 4cbd60f8-abf3-4e44-b98f-73df647c2adc 1074 0 2025-01-29 16:40:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.158 csi-node-driver-n9lss eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0ee4708b236 [] []}} ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Namespace="calico-system" Pod="csi-node-driver-n9lss" WorkloadEndpoint="172.24.4.158-k8s-csi--node--driver--n9lss-" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.299 [INFO][3065] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Namespace="calico-system" Pod="csi-node-driver-n9lss" WorkloadEndpoint="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.342 [INFO][3096] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" HandleID="k8s-pod-network.9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Workload="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.363 [INFO][3096] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" HandleID="k8s-pod-network.9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Workload="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003193a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.158", "pod":"csi-node-driver-n9lss", "timestamp":"2025-01-29 16:40:51.342413642 +0000 UTC"}, Hostname:"172.24.4.158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.363 [INFO][3096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.418 [INFO][3096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.418 [INFO][3096] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.158' Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.424 [INFO][3096] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" host="172.24.4.158" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.450 [INFO][3096] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.158" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.468 [INFO][3096] ipam/ipam.go 489: Trying affinity for 192.168.34.192/26 host="172.24.4.158" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.481 [INFO][3096] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.192/26 host="172.24.4.158" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.496 [INFO][3096] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="172.24.4.158" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.496 [INFO][3096] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" host="172.24.4.158" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.500 [INFO][3096] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.521 [INFO][3096] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" host="172.24.4.158" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.546 [INFO][3096] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.194/26] block=192.168.34.192/26 handle="k8s-pod-network.9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" host="172.24.4.158" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.546 [INFO][3096] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.194/26] handle="k8s-pod-network.9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" host="172.24.4.158" Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.546 [INFO][3096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 16:40:51.579012 containerd[1486]: 2025-01-29 16:40:51.546 [INFO][3096] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.194/26] IPv6=[] ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" HandleID="k8s-pod-network.9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Workload="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" Jan 29 16:40:51.579764 containerd[1486]: 2025-01-29 16:40:51.548 [INFO][3065] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Namespace="calico-system" Pod="csi-node-driver-n9lss" WorkloadEndpoint="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.158-k8s-csi--node--driver--n9lss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cbd60f8-abf3-4e44-b98f-73df647c2adc", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 40, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.158", ContainerID:"", Pod:"csi-node-driver-n9lss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ee4708b236", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:40:51.579764 containerd[1486]: 2025-01-29 16:40:51.548 [INFO][3065] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.194/32] ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Namespace="calico-system" Pod="csi-node-driver-n9lss" WorkloadEndpoint="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" Jan 29 16:40:51.579764 containerd[1486]: 2025-01-29 16:40:51.548 [INFO][3065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ee4708b236 ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Namespace="calico-system" Pod="csi-node-driver-n9lss" WorkloadEndpoint="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" Jan 29 16:40:51.579764 containerd[1486]: 2025-01-29 16:40:51.555 [INFO][3065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Namespace="calico-system" Pod="csi-node-driver-n9lss" WorkloadEndpoint="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" Jan 29 16:40:51.579764 containerd[1486]: 2025-01-29 16:40:51.555 [INFO][3065] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Namespace="calico-system" Pod="csi-node-driver-n9lss" 
WorkloadEndpoint="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.158-k8s-csi--node--driver--n9lss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cbd60f8-abf3-4e44-b98f-73df647c2adc", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 40, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.158", ContainerID:"9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a", Pod:"csi-node-driver-n9lss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ee4708b236", MAC:"5e:f8:9a:29:be:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:40:51.579764 containerd[1486]: 2025-01-29 16:40:51.577 [INFO][3065] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a" Namespace="calico-system" Pod="csi-node-driver-n9lss" WorkloadEndpoint="172.24.4.158-k8s-csi--node--driver--n9lss-eth0" Jan 29 16:40:51.597599 containerd[1486]: time="2025-01-29T16:40:51.597270446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kccdw,Uid:7c5c4917-93fb-4240-8cff-3be51f6e8145,Namespace:default,Attempt:9,} returns sandbox id \"147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47\"" Jan 29 16:40:51.600953 containerd[1486]: time="2025-01-29T16:40:51.600859177Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:40:51.611834 containerd[1486]: time="2025-01-29T16:40:51.611595693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:40:51.612694 containerd[1486]: time="2025-01-29T16:40:51.611846544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:40:51.612694 containerd[1486]: time="2025-01-29T16:40:51.612487877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:40:51.613225 containerd[1486]: time="2025-01-29T16:40:51.613141082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:40:51.645879 systemd[1]: Started cri-containerd-9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a.scope - libcontainer container 9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a. 
Jan 29 16:40:51.695308 containerd[1486]: time="2025-01-29T16:40:51.693327371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9lss,Uid:4cbd60f8-abf3-4e44-b98f-73df647c2adc,Namespace:calico-system,Attempt:12,} returns sandbox id \"9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a\"" Jan 29 16:40:51.906716 kernel: bpftool[3327]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 16:40:52.186032 systemd-networkd[1389]: vxlan.calico: Link UP Jan 29 16:40:52.186042 systemd-networkd[1389]: vxlan.calico: Gained carrier Jan 29 16:40:52.506094 kubelet[1861]: E0129 16:40:52.505942 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:52.677957 systemd-networkd[1389]: cali0ee4708b236: Gained IPv6LL Jan 29 16:40:52.805938 systemd-networkd[1389]: calif36a43ab040: Gained IPv6LL Jan 29 16:40:53.318165 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Jan 29 16:40:53.507526 kubelet[1861]: E0129 16:40:53.507486 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:54.508669 kubelet[1861]: E0129 16:40:54.508550 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:55.508947 kubelet[1861]: E0129 16:40:55.508916 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:56.202140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354974835.mount: Deactivated successfully. Jan 29 16:40:56.510503 kubelet[1861]: E0129 16:40:56.510357 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:57.430610 containerd[1486]: time="2025-01-29T16:40:57.430534892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:57.432379 containerd[1486]: time="2025-01-29T16:40:57.432248757Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 16:40:57.433844 containerd[1486]: time="2025-01-29T16:40:57.433764479Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:57.438346 containerd[1486]: time="2025-01-29T16:40:57.438300587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:57.440278 containerd[1486]: time="2025-01-29T16:40:57.440168069Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.839273206s" Jan 29 16:40:57.440278 containerd[1486]: time="2025-01-29T16:40:57.440198687Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:40:57.443069 containerd[1486]: time="2025-01-29T16:40:57.442941100Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 16:40:57.443957 containerd[1486]: time="2025-01-29T16:40:57.443910859Z" level=info msg="CreateContainer within sandbox \"147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 16:40:57.470749 containerd[1486]: time="2025-01-29T16:40:57.470657081Z" level=info msg="CreateContainer within sandbox \"147c4d3ebfcd9721b77f06bd4da251cab908c85a1c2d062acf15c3ec46342f47\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b4107bdd12da624876322eb186baad47b39013520d60205d9783dafc2afac2e0\"" Jan 29 16:40:57.472682 containerd[1486]: time="2025-01-29T16:40:57.471333320Z" level=info msg="StartContainer for \"b4107bdd12da624876322eb186baad47b39013520d60205d9783dafc2afac2e0\"" Jan 29 16:40:57.511261 kubelet[1861]: E0129 16:40:57.511194 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:57.518050 systemd[1]: Started cri-containerd-b4107bdd12da624876322eb186baad47b39013520d60205d9783dafc2afac2e0.scope - libcontainer container b4107bdd12da624876322eb186baad47b39013520d60205d9783dafc2afac2e0. Jan 29 16:40:57.561686 containerd[1486]: time="2025-01-29T16:40:57.561633378Z" level=info msg="StartContainer for \"b4107bdd12da624876322eb186baad47b39013520d60205d9783dafc2afac2e0\" returns successfully" Jan 29 16:40:58.226406 kubelet[1861]: I0129 16:40:58.226013 1861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-kccdw" podStartSLOduration=10.384021809 podStartE2EDuration="16.225940175s" podCreationTimestamp="2025-01-29 16:40:42 +0000 UTC" firstStartedPulling="2025-01-29 16:40:51.599997531 +0000 UTC m=+29.150068767" lastFinishedPulling="2025-01-29 16:40:57.441915897 +0000 UTC m=+34.991987133" observedRunningTime="2025-01-29 16:40:58.223754465 +0000 UTC m=+35.773825822" watchObservedRunningTime="2025-01-29 16:40:58.225940175 +0000 UTC m=+35.776011471" Jan 29 16:40:58.512343 kubelet[1861]: E0129 16:40:58.512167 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:40:59.122160 containerd[1486]: time="2025-01-29T16:40:59.122079123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:59.123650 containerd[1486]: time="2025-01-29T16:40:59.123460183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 16:40:59.124939 containerd[1486]: time="2025-01-29T16:40:59.124893772Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:59.127686 containerd[1486]: time="2025-01-29T16:40:59.127579659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:40:59.128357 containerd[1486]: time="2025-01-29T16:40:59.128224779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.68525763s" Jan 29 16:40:59.128357 containerd[1486]: time="2025-01-29T16:40:59.128260446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 16:40:59.130858 containerd[1486]: time="2025-01-29T16:40:59.130723055Z" level=info msg="CreateContainer within sandbox \"9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 16:40:59.155800 containerd[1486]: time="2025-01-29T16:40:59.155747027Z" level=info msg="CreateContainer within sandbox \"9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"62f9708bfcd96d25fd300f49b7cbf32a18169e330b269eeda1ae42c58bfddea9\"" Jan 29 16:40:59.156654 containerd[1486]: time="2025-01-29T16:40:59.156244781Z" level=info msg="StartContainer for \"62f9708bfcd96d25fd300f49b7cbf32a18169e330b269eeda1ae42c58bfddea9\"" Jan 29 16:40:59.197798 systemd[1]: Started cri-containerd-62f9708bfcd96d25fd300f49b7cbf32a18169e330b269eeda1ae42c58bfddea9.scope - libcontainer container 62f9708bfcd96d25fd300f49b7cbf32a18169e330b269eeda1ae42c58bfddea9. Jan 29 16:40:59.235103 containerd[1486]: time="2025-01-29T16:40:59.234963327Z" level=info msg="StartContainer for \"62f9708bfcd96d25fd300f49b7cbf32a18169e330b269eeda1ae42c58bfddea9\" returns successfully" Jan 29 16:40:59.236597 containerd[1486]: time="2025-01-29T16:40:59.236558609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 16:40:59.513169 kubelet[1861]: E0129 16:40:59.512962 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:00.513944 kubelet[1861]: E0129 16:41:00.513860 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:01.345764 containerd[1486]: time="2025-01-29T16:41:01.345312958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:41:01.347253 containerd[1486]: time="2025-01-29T16:41:01.347179849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 16:41:01.350086 containerd[1486]: time="2025-01-29T16:41:01.350027921Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:41:01.353501 containerd[1486]: time="2025-01-29T16:41:01.353434369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:41:01.355180 containerd[1486]: time="2025-01-29T16:41:01.354959870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.118349214s" Jan 29 16:41:01.355180 containerd[1486]: time="2025-01-29T16:41:01.355027096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 16:41:01.358970 containerd[1486]: time="2025-01-29T16:41:01.358710154Z" level=info msg="CreateContainer within sandbox \"9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 16:41:01.389195 containerd[1486]: time="2025-01-29T16:41:01.389103948Z" level=info msg="CreateContainer within sandbox \"9bd063d2aaa2ae686fc4adfd0f10df43d36d368e4f1cb5bfccb1d0524f95049a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b63b05d01b1609bc41e42fadd7351aaa89e502c2f363509d7da4ebacf2bd717f\"" Jan 29 16:41:01.389822 containerd[1486]: time="2025-01-29T16:41:01.389766530Z" level=info msg="StartContainer for \"b63b05d01b1609bc41e42fadd7351aaa89e502c2f363509d7da4ebacf2bd717f\"" Jan 29 16:41:01.439238 systemd[1]: run-containerd-runc-k8s.io-b63b05d01b1609bc41e42fadd7351aaa89e502c2f363509d7da4ebacf2bd717f-runc.vsrMLH.mount: Deactivated successfully. Jan 29 16:41:01.452774 systemd[1]: Started cri-containerd-b63b05d01b1609bc41e42fadd7351aaa89e502c2f363509d7da4ebacf2bd717f.scope - libcontainer container b63b05d01b1609bc41e42fadd7351aaa89e502c2f363509d7da4ebacf2bd717f. Jan 29 16:41:01.486262 containerd[1486]: time="2025-01-29T16:41:01.486205722Z" level=info msg="StartContainer for \"b63b05d01b1609bc41e42fadd7351aaa89e502c2f363509d7da4ebacf2bd717f\" returns successfully" Jan 29 16:41:01.514632 kubelet[1861]: E0129 16:41:01.514550 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:01.859844 kubelet[1861]: I0129 16:41:01.859788 1861 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 16:41:01.859844 kubelet[1861]: I0129 16:41:01.859853 1861 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 16:41:02.272746 kubelet[1861]: I0129 16:41:02.272293 1861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n9lss" podStartSLOduration=29.612303881 podStartE2EDuration="39.27226189s" podCreationTimestamp="2025-01-29 16:40:23 +0000 UTC" firstStartedPulling="2025-01-29 16:40:51.696312139 +0000 UTC m=+29.246383385" lastFinishedPulling="2025-01-29 16:41:01.356270098 +0000 UTC m=+38.906341394" observedRunningTime="2025-01-29 16:41:02.270932908 +0000 UTC m=+39.821004264" watchObservedRunningTime="2025-01-29 16:41:02.27226189 +0000 UTC m=+39.822333186" Jan 29 16:41:02.515394 kubelet[1861]: E0129 16:41:02.515319 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:03.480670 kubelet[1861]: E0129 16:41:03.480532 1861 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:03.516174 kubelet[1861]: E0129 16:41:03.516096 1861 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:04.516885 kubelet[1861]: E0129 16:41:04.516755 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:05.517780 kubelet[1861]: E0129 16:41:05.517665 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:05.969267 kubelet[1861]: I0129 16:41:05.969175 1861 topology_manager.go:215] "Topology Admit Handler" podUID="7e4173cd-b58c-4244-aba3-2adacb14e8b9" podNamespace="default" podName="nfs-server-provisioner-0" Jan 29 16:41:05.984498 systemd[1]: Created slice kubepods-besteffort-pod7e4173cd_b58c_4244_aba3_2adacb14e8b9.slice - libcontainer container kubepods-besteffort-pod7e4173cd_b58c_4244_aba3_2adacb14e8b9.slice. Jan 29 16:41:06.150211 kubelet[1861]: I0129 16:41:06.150153 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7e4173cd-b58c-4244-aba3-2adacb14e8b9-data\") pod \"nfs-server-provisioner-0\" (UID: \"7e4173cd-b58c-4244-aba3-2adacb14e8b9\") " pod="default/nfs-server-provisioner-0" Jan 29 16:41:06.150707 kubelet[1861]: I0129 16:41:06.150609 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff6qq\" (UniqueName: \"kubernetes.io/projected/7e4173cd-b58c-4244-aba3-2adacb14e8b9-kube-api-access-ff6qq\") pod \"nfs-server-provisioner-0\" (UID: \"7e4173cd-b58c-4244-aba3-2adacb14e8b9\") " pod="default/nfs-server-provisioner-0" Jan 29 16:41:06.291512 containerd[1486]: time="2025-01-29T16:41:06.291307299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7e4173cd-b58c-4244-aba3-2adacb14e8b9,Namespace:default,Attempt:0,}" Jan 29 16:41:06.519354 kubelet[1861]: E0129 16:41:06.519236 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:06.600704 systemd-networkd[1389]: cali60e51b789ff: Link UP Jan 29 16:41:06.601504 systemd-networkd[1389]: cali60e51b789ff: Gained carrier Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.454 [INFO][3608] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.158-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 7e4173cd-b58c-4244-aba3-2adacb14e8b9 1290 0 2025-01-29 16:41:05 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.158 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.158-k8s-nfs--server--provisioner--0-" Jan 
29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.454 [INFO][3608] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.517 [INFO][3619] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" HandleID="k8s-pod-network.6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Workload="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.542 [INFO][3619] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" HandleID="k8s-pod-network.6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Workload="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319990), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.158", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 16:41:06.517286654 +0000 UTC"}, Hostname:"172.24.4.158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.542 [INFO][3619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.542 [INFO][3619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
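The kubelet file_linux.go:61 message above repeats roughly once per second for the whole span of this log. It comes from the kubelet's static-pod file source: the configured manifest directory (staticPodPath, here /etc/kubernetes/manifests) does not exist on this node, so each check logs the condition and ignores it rather than failing. The loop below is an illustrative stdlib sketch of that behaviour, not kubelet's implementation; the path and the roughly one-second cadence are taken from the log, the rest is assumed.

// staticpods.go: schematic only, not kubelet code.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const path = "/etc/kubernetes/manifests" // staticPodPath seen in the log
	for i := 0; i < 3; i++ {                 // kubelet keeps checking; three rounds for illustration
		if _, err := os.Stat(path); os.IsNotExist(err) {
			fmt.Printf("Unable to read config path: path does not exist, ignoring path=%q\n", path)
		} else {
			fmt.Printf("static pod manifests would be read from %q\n", path)
		}
		time.Sleep(time.Second)
	}
}

Creating the directory, or pointing staticPodPath elsewhere in the kubelet configuration, is the usual way to make this message stop; on this node it is harmless noise.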
Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.542 [INFO][3619] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.158' Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.545 [INFO][3619] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" host="172.24.4.158" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.556 [INFO][3619] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.158" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.564 [INFO][3619] ipam/ipam.go 489: Trying affinity for 192.168.34.192/26 host="172.24.4.158" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.567 [INFO][3619] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.192/26 host="172.24.4.158" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.571 [INFO][3619] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="172.24.4.158" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.571 [INFO][3619] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" host="172.24.4.158" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.574 [INFO][3619] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45 Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.581 [INFO][3619] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" host="172.24.4.158" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.594 [INFO][3619] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.195/26] block=192.168.34.192/26 handle="k8s-pod-network.6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" host="172.24.4.158" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.594 [INFO][3619] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.195/26] handle="k8s-pod-network.6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" host="172.24.4.158" Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.595 [INFO][3619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
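The systemd unit logged earlier for this pod, kubepods-besteffort-pod7e4173cd_b58c_4244_aba3_2adacb14e8b9.slice, is derived from the pod's UID: the pod sits under the kubepods-besteffort parent (its QoS class is BestEffort), and the dashes in the UID become underscores because "-" is the hierarchy separator in systemd slice names. A small reconstruction, illustrative only and not kubelet's code:

// slicename.go: rebuilds the slice unit name from the pod UID shown in the
// topology_manager line above; the derivation rule is stated as an assumption.
package main

import (
	"fmt"
	"strings"
)

func main() {
	uid := "7e4173cd-b58c-4244-aba3-2adacb14e8b9" // UID of nfs-server-provisioner-0
	unit := "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
	fmt.Println(unit) // kubepods-besteffort-pod7e4173cd_b58c_4244_aba3_2adacb14e8b9.slice
}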
Jan 29 16:41:06.627287 containerd[1486]: 2025-01-29 16:41:06.595 [INFO][3619] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.195/26] IPv6=[] ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" HandleID="k8s-pod-network.6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Workload="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:41:06.629577 containerd[1486]: 2025-01-29 16:41:06.596 [INFO][3608] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.158-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7e4173cd-b58c-4244-aba3-2adacb14e8b9", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.158", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.34.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:41:06.629577 containerd[1486]: 2025-01-29 16:41:06.597 [INFO][3608] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.195/32] ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:41:06.629577 containerd[1486]: 2025-01-29 16:41:06.597 [INFO][3608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:41:06.629577 containerd[1486]: 2025-01-29 16:41:06.600 [INFO][3608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:41:06.630310 containerd[1486]: 2025-01-29 16:41:06.601 [INFO][3608] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.158-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7e4173cd-b58c-4244-aba3-2adacb14e8b9", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.158", ContainerID:"6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.34.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f6:45:6e:50:1b:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:41:06.630310 containerd[1486]: 2025-01-29 16:41:06.623 [INFO][3608] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.158-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:41:06.692361 containerd[1486]: time="2025-01-29T16:41:06.691977756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:41:06.692361 containerd[1486]: time="2025-01-29T16:41:06.692034903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:41:06.692361 containerd[1486]: time="2025-01-29T16:41:06.692048599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:41:06.692361 containerd[1486]: time="2025-01-29T16:41:06.692195647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:41:06.723776 systemd[1]: Started cri-containerd-6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45.scope - libcontainer container 6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45. 
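In the WorkloadEndpoint dump above the ports appear as hexadecimal Port values, while the endpoint summary earlier in the log lists the same ports by name and decimal number (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662). A quick conversion, purely illustrative, confirms the two views describe the same NFS-related ports:

// ports.go: prints the hexadecimal values from the endpoint dump next to their
// decimal equivalents; the list is copied from the log, the program is assumed.
package main

import "fmt"

func main() {
	ports := []struct {
		name string
		val  uint16 // value exactly as printed in the WorkloadEndpoint dump
	}{
		{"nfs", 0x801}, {"nlockmgr", 0x8023}, {"mountd", 0x4e50},
		{"rquotad", 0x36b}, {"rpcbind", 0x6f}, {"statd", 0x296},
	}
	for _, p := range ports {
		fmt.Printf("%-9s %#06x = %d\n", p.name, p.val, p.val)
	}
}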
Jan 29 16:41:06.762464 containerd[1486]: time="2025-01-29T16:41:06.762428532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7e4173cd-b58c-4244-aba3-2adacb14e8b9,Namespace:default,Attempt:0,} returns sandbox id \"6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45\"" Jan 29 16:41:06.764605 containerd[1486]: time="2025-01-29T16:41:06.764318996Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 16:41:07.270529 systemd[1]: run-containerd-runc-k8s.io-6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45-runc.VgtV7M.mount: Deactivated successfully. Jan 29 16:41:07.520758 kubelet[1861]: E0129 16:41:07.519552 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:08.230551 systemd-networkd[1389]: cali60e51b789ff: Gained IPv6LL Jan 29 16:41:08.520055 kubelet[1861]: E0129 16:41:08.519825 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:09.520885 kubelet[1861]: E0129 16:41:09.520840 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:10.023546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055344307.mount: Deactivated successfully. Jan 29 16:41:10.521875 kubelet[1861]: E0129 16:41:10.521818 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:10.594548 systemd[1]: run-containerd-runc-k8s.io-b06b220ed574e6d8b8f0af0a9e9c1d31e8e08e5eb908adabd9a106bb7fa86e3f-runc.MlxkWb.mount: Deactivated successfully. Jan 29 16:41:11.522679 kubelet[1861]: E0129 16:41:11.522630 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:12.391778 containerd[1486]: time="2025-01-29T16:41:12.391718157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:41:12.393201 containerd[1486]: time="2025-01-29T16:41:12.393169136Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 29 16:41:12.395461 containerd[1486]: time="2025-01-29T16:41:12.394411685Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:41:12.399176 containerd[1486]: time="2025-01-29T16:41:12.399112580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:41:12.400950 containerd[1486]: time="2025-01-29T16:41:12.400213853Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.635864139s" Jan 29 16:41:12.400950 containerd[1486]: time="2025-01-29T16:41:12.400252986Z" level=info msg="PullImage 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 16:41:12.402964 containerd[1486]: time="2025-01-29T16:41:12.402942366Z" level=info msg="CreateContainer within sandbox \"6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 16:41:12.427836 containerd[1486]: time="2025-01-29T16:41:12.427790461Z" level=info msg="CreateContainer within sandbox \"6023ddf333dac00e5864a4e0556d1ea92d02bf9d37fd359f69e648b51e22ca45\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"47f55243bf909e462036610f54a384cffc97d721ec6d04ada905f2ae9a3fbebb\"" Jan 29 16:41:12.428292 containerd[1486]: time="2025-01-29T16:41:12.428238715Z" level=info msg="StartContainer for \"47f55243bf909e462036610f54a384cffc97d721ec6d04ada905f2ae9a3fbebb\"" Jan 29 16:41:12.458761 systemd[1]: Started cri-containerd-47f55243bf909e462036610f54a384cffc97d721ec6d04ada905f2ae9a3fbebb.scope - libcontainer container 47f55243bf909e462036610f54a384cffc97d721ec6d04ada905f2ae9a3fbebb. Jan 29 16:41:12.487468 containerd[1486]: time="2025-01-29T16:41:12.487426364Z" level=info msg="StartContainer for \"47f55243bf909e462036610f54a384cffc97d721ec6d04ada905f2ae9a3fbebb\" returns successfully" Jan 29 16:41:12.523383 kubelet[1861]: E0129 16:41:12.523335 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:13.376175 kubelet[1861]: I0129 16:41:13.376060 1861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.738569425 podStartE2EDuration="8.376031672s" podCreationTimestamp="2025-01-29 16:41:05 +0000 UTC" firstStartedPulling="2025-01-29 16:41:06.763898854 +0000 UTC m=+44.313970090" lastFinishedPulling="2025-01-29 16:41:12.401361091 +0000 UTC m=+49.951432337" observedRunningTime="2025-01-29 16:41:13.372937572 +0000 UTC m=+50.923008888" watchObservedRunningTime="2025-01-29 16:41:13.376031672 +0000 UTC m=+50.926102968" Jan 29 16:41:13.524083 kubelet[1861]: E0129 16:41:13.524015 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:14.525247 kubelet[1861]: E0129 16:41:14.525167 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:15.525740 kubelet[1861]: E0129 16:41:15.525601 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:16.526432 kubelet[1861]: E0129 16:41:16.526329 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:17.526842 kubelet[1861]: E0129 16:41:17.526721 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:18.527366 kubelet[1861]: E0129 16:41:18.527281 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:19.528001 kubelet[1861]: E0129 16:41:19.527897 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:20.528855 kubelet[1861]: E0129 16:41:20.528783 1861 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:21.529119 kubelet[1861]: E0129 16:41:21.528986 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:22.530073 kubelet[1861]: E0129 16:41:22.530005 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:23.480551 kubelet[1861]: E0129 16:41:23.480492 1861 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:23.530738 kubelet[1861]: E0129 16:41:23.530530 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:23.592720 containerd[1486]: time="2025-01-29T16:41:23.592384085Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:41:23.593414 containerd[1486]: time="2025-01-29T16:41:23.592590183Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:41:23.593527 containerd[1486]: time="2025-01-29T16:41:23.593407708Z" level=info msg="StopPodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:41:23.594797 containerd[1486]: time="2025-01-29T16:41:23.594728449Z" level=info msg="RemovePodSandbox for \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:41:23.594920 containerd[1486]: time="2025-01-29T16:41:23.594797379Z" level=info msg="Forcibly stopping sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\"" Jan 29 16:41:23.595056 containerd[1486]: time="2025-01-29T16:41:23.594967348Z" level=info msg="TearDown network for sandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" successfully" Jan 29 16:41:23.602341 containerd[1486]: time="2025-01-29T16:41:23.601714072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
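The pod_startup_latency_tracker line above for default/nfs-server-provisioner-0 reports both podStartE2EDuration and podStartSLOduration; the difference between them is exactly the image-pull window bounded by firstStartedPulling and lastFinishedPulling, so the SLO figure excludes time spent pulling the image. The same relationship holds for the nginx-deployment and csi-node-driver observations earlier in the log. Below is a standard-library re-derivation of the numbers, illustrative only, using the monotonic m=+ offsets quoted in the log:

// startup_latency.go: re-derives the tracker arithmetic from the logged values.
package main

import (
	"fmt"
	"time"
)

func d(s string) time.Duration {
	v, err := time.ParseDuration(s)
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	e2e := d("8.376031672s")        // podStartE2EDuration
	slo := d("2.738569425s")        // podStartSLOduration
	pullStart := d("44.313970090s") // firstStartedPulling, m=+ offset
	pullEnd := d("49.951432337s")   // lastFinishedPulling, m=+ offset

	fmt.Println("image pull window:", pullEnd-pullStart) // 5.637462247s
	fmt.Println("e2e minus slo:    ", e2e-slo)           // 5.637462247s, identical: the pull time is excluded from the SLO figure
}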
Jan 29 16:41:23.602341 containerd[1486]: time="2025-01-29T16:41:23.601811896Z" level=info msg="RemovePodSandbox \"712d67a78bfd2d84c2f8bc6b027c2e02bc716b5ae8b958dfc6bb70414cc32bb1\" returns successfully" Jan 29 16:41:23.602890 containerd[1486]: time="2025-01-29T16:41:23.602806934Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:41:23.603289 containerd[1486]: time="2025-01-29T16:41:23.603017791Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:41:23.603289 containerd[1486]: time="2025-01-29T16:41:23.603119672Z" level=info msg="StopPodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:41:23.607302 containerd[1486]: time="2025-01-29T16:41:23.606074793Z" level=info msg="RemovePodSandbox for \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:41:23.607302 containerd[1486]: time="2025-01-29T16:41:23.606131971Z" level=info msg="Forcibly stopping sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\"" Jan 29 16:41:23.607302 containerd[1486]: time="2025-01-29T16:41:23.606259841Z" level=info msg="TearDown network for sandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" successfully" Jan 29 16:41:23.611781 containerd[1486]: time="2025-01-29T16:41:23.611681135Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.611781 containerd[1486]: time="2025-01-29T16:41:23.611772467Z" level=info msg="RemovePodSandbox \"230b9db21fb2cac9089cbec034c52bc2770b9906cd96363cb98cf748c2932196\" returns successfully" Jan 29 16:41:23.613035 containerd[1486]: time="2025-01-29T16:41:23.612657118Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:41:23.613035 containerd[1486]: time="2025-01-29T16:41:23.612863216Z" level=info msg="TearDown network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" successfully" Jan 29 16:41:23.613035 containerd[1486]: time="2025-01-29T16:41:23.612894725Z" level=info msg="StopPodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" returns successfully" Jan 29 16:41:23.613503 containerd[1486]: time="2025-01-29T16:41:23.613426003Z" level=info msg="RemovePodSandbox for \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:41:23.613503 containerd[1486]: time="2025-01-29T16:41:23.613491466Z" level=info msg="Forcibly stopping sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\"" Jan 29 16:41:23.613819 containerd[1486]: time="2025-01-29T16:41:23.613673788Z" level=info msg="TearDown network for sandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" successfully" Jan 29 16:41:23.619007 containerd[1486]: time="2025-01-29T16:41:23.618892692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.619007 containerd[1486]: time="2025-01-29T16:41:23.618975888Z" level=info msg="RemovePodSandbox \"08a02ada0b995c6fd45994c5135609db6e61aec9329a8202dc81f9150b0705b6\" returns successfully" Jan 29 16:41:23.620450 containerd[1486]: time="2025-01-29T16:41:23.619700250Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" Jan 29 16:41:23.620450 containerd[1486]: time="2025-01-29T16:41:23.619955118Z" level=info msg="TearDown network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" successfully" Jan 29 16:41:23.620450 containerd[1486]: time="2025-01-29T16:41:23.619995755Z" level=info msg="StopPodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" returns successfully" Jan 29 16:41:23.621166 containerd[1486]: time="2025-01-29T16:41:23.621110408Z" level=info msg="RemovePodSandbox for \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" Jan 29 16:41:23.621884 containerd[1486]: time="2025-01-29T16:41:23.621352764Z" level=info msg="Forcibly stopping sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\"" Jan 29 16:41:23.621884 containerd[1486]: time="2025-01-29T16:41:23.621551206Z" level=info msg="TearDown network for sandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" successfully" Jan 29 16:41:23.627907 containerd[1486]: time="2025-01-29T16:41:23.627794494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.628058 containerd[1486]: time="2025-01-29T16:41:23.627905933Z" level=info msg="RemovePodSandbox \"7add259d705b25d838459b6fcef2fdf2ffe0d75f7a395f03a0822fc8ee3738f1\" returns successfully" Jan 29 16:41:23.629117 containerd[1486]: time="2025-01-29T16:41:23.628731385Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\"" Jan 29 16:41:23.629117 containerd[1486]: time="2025-01-29T16:41:23.628924347Z" level=info msg="TearDown network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" successfully" Jan 29 16:41:23.629117 containerd[1486]: time="2025-01-29T16:41:23.628953512Z" level=info msg="StopPodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" returns successfully" Jan 29 16:41:23.630102 containerd[1486]: time="2025-01-29T16:41:23.629682381Z" level=info msg="RemovePodSandbox for \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\"" Jan 29 16:41:23.630102 containerd[1486]: time="2025-01-29T16:41:23.629761439Z" level=info msg="Forcibly stopping sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\"" Jan 29 16:41:23.630102 containerd[1486]: time="2025-01-29T16:41:23.629900151Z" level=info msg="TearDown network for sandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" successfully" Jan 29 16:41:23.635240 containerd[1486]: time="2025-01-29T16:41:23.635128551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.635240 containerd[1486]: time="2025-01-29T16:41:23.635213392Z" level=info msg="RemovePodSandbox \"b61e8f972b5578c1f78ef9c1f089c1ab94e61d777f16ca5e1e99f893c58a2c47\" returns successfully" Jan 29 16:41:23.636585 containerd[1486]: time="2025-01-29T16:41:23.636322254Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\"" Jan 29 16:41:23.636585 containerd[1486]: time="2025-01-29T16:41:23.636525326Z" level=info msg="TearDown network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" successfully" Jan 29 16:41:23.636585 containerd[1486]: time="2025-01-29T16:41:23.636554741Z" level=info msg="StopPodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" returns successfully" Jan 29 16:41:23.637901 containerd[1486]: time="2025-01-29T16:41:23.637526938Z" level=info msg="RemovePodSandbox for \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\"" Jan 29 16:41:23.637901 containerd[1486]: time="2025-01-29T16:41:23.637582101Z" level=info msg="Forcibly stopping sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\"" Jan 29 16:41:23.637901 containerd[1486]: time="2025-01-29T16:41:23.637754635Z" level=info msg="TearDown network for sandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" successfully" Jan 29 16:41:23.642307 containerd[1486]: time="2025-01-29T16:41:23.642196670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.642307 containerd[1486]: time="2025-01-29T16:41:23.642292590Z" level=info msg="RemovePodSandbox \"df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea\" returns successfully" Jan 29 16:41:23.643885 containerd[1486]: time="2025-01-29T16:41:23.643307346Z" level=info msg="StopPodSandbox for \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\"" Jan 29 16:41:23.643885 containerd[1486]: time="2025-01-29T16:41:23.643478307Z" level=info msg="TearDown network for sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" successfully" Jan 29 16:41:23.643885 containerd[1486]: time="2025-01-29T16:41:23.643506630Z" level=info msg="StopPodSandbox for \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" returns successfully" Jan 29 16:41:23.645395 containerd[1486]: time="2025-01-29T16:41:23.644964569Z" level=info msg="RemovePodSandbox for \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\"" Jan 29 16:41:23.645395 containerd[1486]: time="2025-01-29T16:41:23.645019512Z" level=info msg="Forcibly stopping sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\"" Jan 29 16:41:23.645395 containerd[1486]: time="2025-01-29T16:41:23.645146862Z" level=info msg="TearDown network for sandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" successfully" Jan 29 16:41:23.650926 containerd[1486]: time="2025-01-29T16:41:23.650682850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.650926 containerd[1486]: time="2025-01-29T16:41:23.650763643Z" level=info msg="RemovePodSandbox \"76cde31a5fae2840998f0a0a4c99591b92d6be8377d53ca7fb1328184fec2c46\" returns successfully" Jan 29 16:41:23.651717 containerd[1486]: time="2025-01-29T16:41:23.651449111Z" level=info msg="StopPodSandbox for \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\"" Jan 29 16:41:23.651717 containerd[1486]: time="2025-01-29T16:41:23.651674393Z" level=info msg="TearDown network for sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\" successfully" Jan 29 16:41:23.651717 containerd[1486]: time="2025-01-29T16:41:23.651706944Z" level=info msg="StopPodSandbox for \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\" returns successfully" Jan 29 16:41:23.652733 containerd[1486]: time="2025-01-29T16:41:23.652569455Z" level=info msg="RemovePodSandbox for \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\"" Jan 29 16:41:23.652733 containerd[1486]: time="2025-01-29T16:41:23.652656478Z" level=info msg="Forcibly stopping sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\"" Jan 29 16:41:23.652962 containerd[1486]: time="2025-01-29T16:41:23.652835996Z" level=info msg="TearDown network for sandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\" successfully" Jan 29 16:41:23.657796 containerd[1486]: time="2025-01-29T16:41:23.657594995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.657796 containerd[1486]: time="2025-01-29T16:41:23.657717746Z" level=info msg="RemovePodSandbox \"04683ce4d2bd82be0644fdc0f99437560430ff116b7208325fce8d20b8dfcd38\" returns successfully" Jan 29 16:41:23.659168 containerd[1486]: time="2025-01-29T16:41:23.658696375Z" level=info msg="StopPodSandbox for \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\"" Jan 29 16:41:23.659168 containerd[1486]: time="2025-01-29T16:41:23.658868217Z" level=info msg="TearDown network for sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\" successfully" Jan 29 16:41:23.659168 containerd[1486]: time="2025-01-29T16:41:23.658895418Z" level=info msg="StopPodSandbox for \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\" returns successfully" Jan 29 16:41:23.660614 containerd[1486]: time="2025-01-29T16:41:23.660422577Z" level=info msg="RemovePodSandbox for \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\"" Jan 29 16:41:23.660614 containerd[1486]: time="2025-01-29T16:41:23.660478101Z" level=info msg="Forcibly stopping sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\"" Jan 29 16:41:23.661248 containerd[1486]: time="2025-01-29T16:41:23.660964535Z" level=info msg="TearDown network for sandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\" successfully" Jan 29 16:41:23.666922 containerd[1486]: time="2025-01-29T16:41:23.666823090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.666922 containerd[1486]: time="2025-01-29T16:41:23.666912248Z" level=info msg="RemovePodSandbox \"428273fcc6fa56e3609e2656dc3094e4e180154cccb540be2729b72dfe72f5fb\" returns successfully" Jan 29 16:41:23.667931 containerd[1486]: time="2025-01-29T16:41:23.667849518Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:41:23.668166 containerd[1486]: time="2025-01-29T16:41:23.668017945Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:41:23.668166 containerd[1486]: time="2025-01-29T16:41:23.668045787Z" level=info msg="StopPodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:41:23.668874 containerd[1486]: time="2025-01-29T16:41:23.668822206Z" level=info msg="RemovePodSandbox for \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:41:23.668874 containerd[1486]: time="2025-01-29T16:41:23.668879092Z" level=info msg="Forcibly stopping sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\"" Jan 29 16:41:23.669081 containerd[1486]: time="2025-01-29T16:41:23.668998417Z" level=info msg="TearDown network for sandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" successfully" Jan 29 16:41:23.673832 containerd[1486]: time="2025-01-29T16:41:23.673767175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.674864 containerd[1486]: time="2025-01-29T16:41:23.673848648Z" level=info msg="RemovePodSandbox \"43e628c21e34b0098b4aa29c106afd8ab693f26dd856c0d083df06aff4136156\" returns successfully" Jan 29 16:41:23.674864 containerd[1486]: time="2025-01-29T16:41:23.674419871Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:41:23.674864 containerd[1486]: time="2025-01-29T16:41:23.674692203Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:41:23.674864 containerd[1486]: time="2025-01-29T16:41:23.674724343Z" level=info msg="StopPodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:41:23.676971 containerd[1486]: time="2025-01-29T16:41:23.676775847Z" level=info msg="RemovePodSandbox for \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:41:23.676971 containerd[1486]: time="2025-01-29T16:41:23.676828305Z" level=info msg="Forcibly stopping sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\"" Jan 29 16:41:23.677242 containerd[1486]: time="2025-01-29T16:41:23.676991562Z" level=info msg="TearDown network for sandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" successfully" Jan 29 16:41:23.697207 containerd[1486]: time="2025-01-29T16:41:23.696726094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.697207 containerd[1486]: time="2025-01-29T16:41:23.696812677Z" level=info msg="RemovePodSandbox \"c017a8f91852ffd92a6063420520e1edae30291fa5a12e37e6f5d6b841f844c3\" returns successfully" Jan 29 16:41:23.698229 containerd[1486]: time="2025-01-29T16:41:23.698154116Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:41:23.698406 containerd[1486]: time="2025-01-29T16:41:23.698356466Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:41:23.698406 containerd[1486]: time="2025-01-29T16:41:23.698399578Z" level=info msg="StopPodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:41:23.699718 containerd[1486]: time="2025-01-29T16:41:23.699046452Z" level=info msg="RemovePodSandbox for \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:41:23.699718 containerd[1486]: time="2025-01-29T16:41:23.699109391Z" level=info msg="Forcibly stopping sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\"" Jan 29 16:41:23.699718 containerd[1486]: time="2025-01-29T16:41:23.699261176Z" level=info msg="TearDown network for sandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" successfully" Jan 29 16:41:23.704177 containerd[1486]: time="2025-01-29T16:41:23.704099745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.704177 containerd[1486]: time="2025-01-29T16:41:23.704189524Z" level=info msg="RemovePodSandbox \"7c35cee6d03d84ecc37b842869a98e618bac200978e971ad4db2efbb5d1ca1af\" returns successfully" Jan 29 16:41:23.705267 containerd[1486]: time="2025-01-29T16:41:23.704894598Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:41:23.705267 containerd[1486]: time="2025-01-29T16:41:23.705045422Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:41:23.705267 containerd[1486]: time="2025-01-29T16:41:23.705074106Z" level=info msg="StopPodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:41:23.706060 containerd[1486]: time="2025-01-29T16:41:23.705887383Z" level=info msg="RemovePodSandbox for \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:41:23.706060 containerd[1486]: time="2025-01-29T16:41:23.705991950Z" level=info msg="Forcibly stopping sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\"" Jan 29 16:41:23.706373 containerd[1486]: time="2025-01-29T16:41:23.706171737Z" level=info msg="TearDown network for sandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" successfully" Jan 29 16:41:23.711093 containerd[1486]: time="2025-01-29T16:41:23.710955464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.711093 containerd[1486]: time="2025-01-29T16:41:23.711085648Z" level=info msg="RemovePodSandbox \"502801b86b18b83dbe35f26964cbdd0b8b4c28be799507a233047976fdef5906\" returns successfully" Jan 29 16:41:23.712185 containerd[1486]: time="2025-01-29T16:41:23.711800601Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:41:23.712185 containerd[1486]: time="2025-01-29T16:41:23.711965782Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:41:23.712185 containerd[1486]: time="2025-01-29T16:41:23.712051783Z" level=info msg="StopPodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns successfully" Jan 29 16:41:23.712921 containerd[1486]: time="2025-01-29T16:41:23.712581478Z" level=info msg="RemovePodSandbox for \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:41:23.713528 containerd[1486]: time="2025-01-29T16:41:23.713252849Z" level=info msg="Forcibly stopping sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\"" Jan 29 16:41:23.713528 containerd[1486]: time="2025-01-29T16:41:23.713405045Z" level=info msg="TearDown network for sandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" successfully" Jan 29 16:41:23.718601 containerd[1486]: time="2025-01-29T16:41:23.718373228Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.718601 containerd[1486]: time="2025-01-29T16:41:23.718451996Z" level=info msg="RemovePodSandbox \"11a73d9e349d2479251ef84419bb8fc7434dd42676dbf26586b9249ff89fef69\" returns successfully" Jan 29 16:41:23.719731 containerd[1486]: time="2025-01-29T16:41:23.719285172Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:41:23.719731 containerd[1486]: time="2025-01-29T16:41:23.719446484Z" level=info msg="TearDown network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" successfully" Jan 29 16:41:23.719731 containerd[1486]: time="2025-01-29T16:41:23.719474366Z" level=info msg="StopPodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" returns successfully" Jan 29 16:41:23.720599 containerd[1486]: time="2025-01-29T16:41:23.720223374Z" level=info msg="RemovePodSandbox for \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:41:23.720599 containerd[1486]: time="2025-01-29T16:41:23.720329173Z" level=info msg="Forcibly stopping sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\"" Jan 29 16:41:23.720599 containerd[1486]: time="2025-01-29T16:41:23.720463745Z" level=info msg="TearDown network for sandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" successfully" Jan 29 16:41:23.725220 containerd[1486]: time="2025-01-29T16:41:23.725136132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.725376 containerd[1486]: time="2025-01-29T16:41:23.725228066Z" level=info msg="RemovePodSandbox \"43b76fe8b3e17d463ff154868b29694c9d07d78eeeb0ebc67b08a0bfc678a523\" returns successfully" Jan 29 16:41:23.726450 containerd[1486]: time="2025-01-29T16:41:23.726028409Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" Jan 29 16:41:23.726450 containerd[1486]: time="2025-01-29T16:41:23.726191075Z" level=info msg="TearDown network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" successfully" Jan 29 16:41:23.726450 containerd[1486]: time="2025-01-29T16:41:23.726217865Z" level=info msg="StopPodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" returns successfully" Jan 29 16:41:23.727117 containerd[1486]: time="2025-01-29T16:41:23.727036923Z" level=info msg="RemovePodSandbox for \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" Jan 29 16:41:23.727117 containerd[1486]: time="2025-01-29T16:41:23.727098468Z" level=info msg="Forcibly stopping sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\"" Jan 29 16:41:23.727528 containerd[1486]: time="2025-01-29T16:41:23.727334923Z" level=info msg="TearDown network for sandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" successfully" Jan 29 16:41:23.732702 containerd[1486]: time="2025-01-29T16:41:23.732167341Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.732702 containerd[1486]: time="2025-01-29T16:41:23.732249495Z" level=info msg="RemovePodSandbox \"d2189c0e65d325b19e9da662f5a7e89297e21ff30ac9718da7750fc88d28d5fe\" returns successfully" Jan 29 16:41:23.732702 containerd[1486]: time="2025-01-29T16:41:23.733535862Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\"" Jan 29 16:41:23.732702 containerd[1486]: time="2025-01-29T16:41:23.733742800Z" level=info msg="TearDown network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" successfully" Jan 29 16:41:23.732702 containerd[1486]: time="2025-01-29T16:41:23.733772025Z" level=info msg="StopPodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" returns successfully" Jan 29 16:41:23.738156 containerd[1486]: time="2025-01-29T16:41:23.737807246Z" level=info msg="RemovePodSandbox for \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\"" Jan 29 16:41:23.738156 containerd[1486]: time="2025-01-29T16:41:23.737935777Z" level=info msg="Forcibly stopping sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\"" Jan 29 16:41:23.738429 containerd[1486]: time="2025-01-29T16:41:23.738135863Z" level=info msg="TearDown network for sandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" successfully" Jan 29 16:41:23.742842 containerd[1486]: time="2025-01-29T16:41:23.742753497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.743045 containerd[1486]: time="2025-01-29T16:41:23.742845790Z" level=info msg="RemovePodSandbox \"549e195df24b58b795cfc8ce8cb4d7927f8689ed3bbe686954f4ab7d13a5ef3c\" returns successfully" Jan 29 16:41:23.743983 containerd[1486]: time="2025-01-29T16:41:23.743677232Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\"" Jan 29 16:41:23.743983 containerd[1486]: time="2025-01-29T16:41:23.743836221Z" level=info msg="TearDown network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" successfully" Jan 29 16:41:23.743983 containerd[1486]: time="2025-01-29T16:41:23.743863101Z" level=info msg="StopPodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" returns successfully" Jan 29 16:41:23.744566 containerd[1486]: time="2025-01-29T16:41:23.744498235Z" level=info msg="RemovePodSandbox for \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\"" Jan 29 16:41:23.744566 containerd[1486]: time="2025-01-29T16:41:23.744545503Z" level=info msg="Forcibly stopping sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\"" Jan 29 16:41:23.744963 containerd[1486]: time="2025-01-29T16:41:23.744730030Z" level=info msg="TearDown network for sandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" successfully" Jan 29 16:41:23.749455 containerd[1486]: time="2025-01-29T16:41:23.749350400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.749455 containerd[1486]: time="2025-01-29T16:41:23.749429318Z" level=info msg="RemovePodSandbox \"b6e3f7b94e769cb182bba9e1e39e7445fa5ef56b68f2181a9bbb9bbea532c38f\" returns successfully" Jan 29 16:41:23.750687 containerd[1486]: time="2025-01-29T16:41:23.750206247Z" level=info msg="StopPodSandbox for \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\"" Jan 29 16:41:23.750687 containerd[1486]: time="2025-01-29T16:41:23.750359736Z" level=info msg="TearDown network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" successfully" Jan 29 16:41:23.750687 containerd[1486]: time="2025-01-29T16:41:23.750385905Z" level=info msg="StopPodSandbox for \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" returns successfully" Jan 29 16:41:23.751843 containerd[1486]: time="2025-01-29T16:41:23.751786726Z" level=info msg="RemovePodSandbox for \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\"" Jan 29 16:41:23.752036 containerd[1486]: time="2025-01-29T16:41:23.751879560Z" level=info msg="Forcibly stopping sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\"" Jan 29 16:41:23.752160 containerd[1486]: time="2025-01-29T16:41:23.752063496Z" level=info msg="TearDown network for sandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" successfully" Jan 29 16:41:23.757182 containerd[1486]: time="2025-01-29T16:41:23.757068909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.757182 containerd[1486]: time="2025-01-29T16:41:23.757164708Z" level=info msg="RemovePodSandbox \"aa6c500096afc48ade63208e77646d31cab619e3440417e2c8a42ad163c489c5\" returns successfully" Jan 29 16:41:23.758204 containerd[1486]: time="2025-01-29T16:41:23.757865064Z" level=info msg="StopPodSandbox for \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\"" Jan 29 16:41:23.758204 containerd[1486]: time="2025-01-29T16:41:23.758023462Z" level=info msg="TearDown network for sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\" successfully" Jan 29 16:41:23.758204 containerd[1486]: time="2025-01-29T16:41:23.758050473Z" level=info msg="StopPodSandbox for \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\" returns successfully" Jan 29 16:41:23.759135 containerd[1486]: time="2025-01-29T16:41:23.759054469Z" level=info msg="RemovePodSandbox for \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\"" Jan 29 16:41:23.759135 containerd[1486]: time="2025-01-29T16:41:23.759117026Z" level=info msg="Forcibly stopping sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\"" Jan 29 16:41:23.759325 containerd[1486]: time="2025-01-29T16:41:23.759252500Z" level=info msg="TearDown network for sandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\" successfully" Jan 29 16:41:23.764508 containerd[1486]: time="2025-01-29T16:41:23.764404779Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:41:23.764508 containerd[1486]: time="2025-01-29T16:41:23.764491372Z" level=info msg="RemovePodSandbox \"20aae5e82eb911f2b24ec923868d792bb2bb4660d5b7ea521d5392baa0f4151c\" returns successfully" Jan 29 16:41:23.765558 containerd[1486]: time="2025-01-29T16:41:23.765216745Z" level=info msg="StopPodSandbox for \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\"" Jan 29 16:41:23.765558 containerd[1486]: time="2025-01-29T16:41:23.765372337Z" level=info msg="TearDown network for sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\" successfully" Jan 29 16:41:23.765558 containerd[1486]: time="2025-01-29T16:41:23.765397995Z" level=info msg="StopPodSandbox for \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\" returns successfully" Jan 29 16:41:23.766710 containerd[1486]: time="2025-01-29T16:41:23.766604131Z" level=info msg="RemovePodSandbox for \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\"" Jan 29 16:41:23.766881 containerd[1486]: time="2025-01-29T16:41:23.766719568Z" level=info msg="Forcibly stopping sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\"" Jan 29 16:41:23.767211 containerd[1486]: time="2025-01-29T16:41:23.767069916Z" level=info msg="TearDown network for sandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\" successfully" Jan 29 16:41:23.772193 containerd[1486]: time="2025-01-29T16:41:23.772090717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:41:23.772193 containerd[1486]: time="2025-01-29T16:41:23.772176779Z" level=info msg="RemovePodSandbox \"9416e8c62e012a7df05581d6d829051a0c958cad81e07807c3013b7c905b26b6\" returns successfully" Jan 29 16:41:24.531877 kubelet[1861]: E0129 16:41:24.531804 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:25.532110 kubelet[1861]: E0129 16:41:25.532022 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:26.532854 kubelet[1861]: E0129 16:41:26.532721 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:27.533340 kubelet[1861]: E0129 16:41:27.533168 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:28.533596 kubelet[1861]: E0129 16:41:28.533418 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:29.534485 kubelet[1861]: E0129 16:41:29.534389 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:30.535319 kubelet[1861]: E0129 16:41:30.535228 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:31.535519 kubelet[1861]: E0129 16:41:31.535434 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:32.536602 kubelet[1861]: E0129 16:41:32.536507 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:33.537168 kubelet[1861]: E0129 16:41:33.537058 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:34.537739 kubelet[1861]: E0129 16:41:34.537655 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:35.538694 kubelet[1861]: E0129 16:41:35.538479 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:36.538884 kubelet[1861]: E0129 16:41:36.538749 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:37.037710 kubelet[1861]: I0129 16:41:37.037550 1861 topology_manager.go:215] "Topology Admit Handler" podUID="3d49ec01-2d98-46dc-bcef-57b92bbe9dbf" podNamespace="default" podName="test-pod-1" Jan 29 16:41:37.053425 systemd[1]: Created slice kubepods-besteffort-pod3d49ec01_2d98_46dc_bcef_57b92bbe9dbf.slice - libcontainer container kubepods-besteffort-pod3d49ec01_2d98_46dc_bcef_57b92bbe9dbf.slice. 
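The long run of StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox entries above is the kubelet garbage-collecting old pod sandboxes through the CRI; the "not found" warnings are benign, since the sandboxes are already gone by the time containerd tries to attach a status to the removal event. Below is a minimal sketch of a standalone client driving the same two RPCs, assuming the k8s.io/cri-api v1 Go bindings and containerd's default CRI socket; the sandbox ID is copied from the log.

// Sketch only: issues the StopPodSandbox / RemovePodSandbox RPCs that
// produce the containerd log lines above. Assumes k8s.io/cri-api v1 and
// the default containerd CRI socket path.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI plugin listens on this unix socket by default.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial containerd: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Sandbox ID taken from the log; stop tears down the pod network,
	// remove deletes the sandbox record (the two "returns successfully" lines).
	const sandboxID = "df7bf4bed7450373f1ec20250bd18976584b85ceb7b22473e52767da6cbffdea"

	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("StopPodSandbox: %v", err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("RemovePodSandbox: %v", err)
	}
}

crictl's stopp and rmp subcommands wrap the same pair of calls, which is the quickest way to reproduce these log lines by hand.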
Jan 29 16:41:37.176557 kubelet[1861]: I0129 16:41:37.176462 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fd946eb9-1c67-45a1-b305-d26cc6ace322\" (UniqueName: \"kubernetes.io/nfs/3d49ec01-2d98-46dc-bcef-57b92bbe9dbf-pvc-fd946eb9-1c67-45a1-b305-d26cc6ace322\") pod \"test-pod-1\" (UID: \"3d49ec01-2d98-46dc-bcef-57b92bbe9dbf\") " pod="default/test-pod-1" Jan 29 16:41:37.176557 kubelet[1861]: I0129 16:41:37.176554 1861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbtw9\" (UniqueName: \"kubernetes.io/projected/3d49ec01-2d98-46dc-bcef-57b92bbe9dbf-kube-api-access-fbtw9\") pod \"test-pod-1\" (UID: \"3d49ec01-2d98-46dc-bcef-57b92bbe9dbf\") " pod="default/test-pod-1" Jan 29 16:41:37.443678 kernel: FS-Cache: Loaded Jan 29 16:41:37.540025 kubelet[1861]: E0129 16:41:37.539908 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:37.548374 kernel: RPC: Registered named UNIX socket transport module. Jan 29 16:41:37.548541 kernel: RPC: Registered udp transport module. Jan 29 16:41:37.548589 kernel: RPC: Registered tcp transport module. Jan 29 16:41:37.549039 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 16:41:37.550564 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 16:41:37.860484 kernel: NFS: Registering the id_resolver key type Jan 29 16:41:37.860686 kernel: Key type id_resolver registered Jan 29 16:41:37.860748 kernel: Key type id_legacy registered Jan 29 16:41:37.907405 nfsidmap[3841]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 29 16:41:37.916385 nfsidmap[3842]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 29 16:41:37.960937 containerd[1486]: time="2025-01-29T16:41:37.960844374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3d49ec01-2d98-46dc-bcef-57b92bbe9dbf,Namespace:default,Attempt:0,}" Jan 29 16:41:38.187481 systemd-networkd[1389]: cali5ec59c6bf6e: Link UP Jan 29 16:41:38.188036 systemd-networkd[1389]: cali5ec59c6bf6e: Gained carrier Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.054 [INFO][3843] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.158-k8s-test--pod--1-eth0 default 3d49ec01-2d98-46dc-bcef-57b92bbe9dbf 1395 0 2025-01-29 16:41:08 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.158 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.158-k8s-test--pod--1-" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.054 [INFO][3843] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.158-k8s-test--pod--1-eth0" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.092 [INFO][3854] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" 
HandleID="k8s-pod-network.6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Workload="172.24.4.158-k8s-test--pod--1-eth0" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.115 [INFO][3854] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" HandleID="k8s-pod-network.6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Workload="172.24.4.158-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a4240), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.158", "pod":"test-pod-1", "timestamp":"2025-01-29 16:41:38.092753854 +0000 UTC"}, Hostname:"172.24.4.158", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.116 [INFO][3854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.116 [INFO][3854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.116 [INFO][3854] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.158' Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.119 [INFO][3854] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" host="172.24.4.158" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.129 [INFO][3854] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.158" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.138 [INFO][3854] ipam/ipam.go 489: Trying affinity for 192.168.34.192/26 host="172.24.4.158" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.142 [INFO][3854] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.192/26 host="172.24.4.158" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.149 [INFO][3854] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="172.24.4.158" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.149 [INFO][3854] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" host="172.24.4.158" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.154 [INFO][3854] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232 Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.164 [INFO][3854] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" host="172.24.4.158" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.176 [INFO][3854] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.196/26] block=192.168.34.192/26 handle="k8s-pod-network.6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" host="172.24.4.158" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.176 [INFO][3854] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.196/26] handle="k8s-pod-network.6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" host="172.24.4.158" Jan 29 
16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.176 [INFO][3854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.176 [INFO][3854] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.196/26] IPv6=[] ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" HandleID="k8s-pod-network.6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Workload="172.24.4.158-k8s-test--pod--1-eth0" Jan 29 16:41:38.216187 containerd[1486]: 2025-01-29 16:41:38.180 [INFO][3843] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.158-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.158-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3d49ec01-2d98-46dc-bcef-57b92bbe9dbf", ResourceVersion:"1395", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.158", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:41:38.221487 containerd[1486]: 2025-01-29 16:41:38.181 [INFO][3843] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.196/32] ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.158-k8s-test--pod--1-eth0" Jan 29 16:41:38.221487 containerd[1486]: 2025-01-29 16:41:38.181 [INFO][3843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.158-k8s-test--pod--1-eth0" Jan 29 16:41:38.221487 containerd[1486]: 2025-01-29 16:41:38.187 [INFO][3843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.158-k8s-test--pod--1-eth0" Jan 29 16:41:38.221487 containerd[1486]: 2025-01-29 16:41:38.189 [INFO][3843] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.158-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.158-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", 
UID:"3d49ec01-2d98-46dc-bcef-57b92bbe9dbf", ResourceVersion:"1395", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.158", ContainerID:"6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ba:06:6c:b0:15:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:41:38.221487 containerd[1486]: 2025-01-29 16:41:38.213 [INFO][3843] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.158-k8s-test--pod--1-eth0" Jan 29 16:41:38.261575 containerd[1486]: time="2025-01-29T16:41:38.260496738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:41:38.261575 containerd[1486]: time="2025-01-29T16:41:38.261349268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:41:38.261575 containerd[1486]: time="2025-01-29T16:41:38.261365729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:41:38.261801 containerd[1486]: time="2025-01-29T16:41:38.261453163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:41:38.281845 systemd[1]: Started cri-containerd-6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232.scope - libcontainer container 6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232. 
Jan 29 16:41:38.327232 containerd[1486]: time="2025-01-29T16:41:38.327196560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3d49ec01-2d98-46dc-bcef-57b92bbe9dbf,Namespace:default,Attempt:0,} returns sandbox id \"6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232\"" Jan 29 16:41:38.328771 containerd[1486]: time="2025-01-29T16:41:38.328749965Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:41:38.541185 kubelet[1861]: E0129 16:41:38.540974 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:38.840933 containerd[1486]: time="2025-01-29T16:41:38.840776267Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:41:38.843766 containerd[1486]: time="2025-01-29T16:41:38.843656172Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 16:41:38.851069 containerd[1486]: time="2025-01-29T16:41:38.850996277Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 522.204984ms" Jan 29 16:41:38.851234 containerd[1486]: time="2025-01-29T16:41:38.851070045Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:41:38.855770 containerd[1486]: time="2025-01-29T16:41:38.855487746Z" level=info msg="CreateContainer within sandbox \"6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 16:41:38.887250 containerd[1486]: time="2025-01-29T16:41:38.887184248Z" level=info msg="CreateContainer within sandbox \"6ca4f31c28ced766438498af31a3f925d8b36191c5e9b6547f222a45ccea5232\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"aac1252b918a18b0b8c8b9ef86d25df4abed25f571de09a0d31d74ef0742690f\"" Jan 29 16:41:38.888842 containerd[1486]: time="2025-01-29T16:41:38.888663385Z" level=info msg="StartContainer for \"aac1252b918a18b0b8c8b9ef86d25df4abed25f571de09a0d31d74ef0742690f\"" Jan 29 16:41:38.947853 systemd[1]: Started cri-containerd-aac1252b918a18b0b8c8b9ef86d25df4abed25f571de09a0d31d74ef0742690f.scope - libcontainer container aac1252b918a18b0b8c8b9ef86d25df4abed25f571de09a0d31d74ef0742690f. 
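The PullImage / CreateContainer / StartContainer lines above are the image-service and runtime-service halves of the CRI bringing up the test container inside the new sandbox. A minimal sketch of just the pull step, under the same cri-api and socket assumptions as the earlier sandbox example; the image reference is the one the kubelet pulls here:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial containerd: %v", err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Same image reference the kubelet pulls above for test-pod-1.
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/nginx:latest"},
	})
	if err != nil {
		log.Fatalf("PullImage: %v", err)
	}
	log.Printf("pulled image ref: %s", resp.ImageRef) // digest-pinned ref, as in the log
}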
Jan 29 16:41:38.975254 containerd[1486]: time="2025-01-29T16:41:38.975203099Z" level=info msg="StartContainer for \"aac1252b918a18b0b8c8b9ef86d25df4abed25f571de09a0d31d74ef0742690f\" returns successfully" Jan 29 16:41:39.453482 kubelet[1861]: I0129 16:41:39.453094 1861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=30.929058379 podStartE2EDuration="31.453060906s" podCreationTimestamp="2025-01-29 16:41:08 +0000 UTC" firstStartedPulling="2025-01-29 16:41:38.328288229 +0000 UTC m=+75.878359475" lastFinishedPulling="2025-01-29 16:41:38.852290706 +0000 UTC m=+76.402362002" observedRunningTime="2025-01-29 16:41:39.452738732 +0000 UTC m=+77.002810088" watchObservedRunningTime="2025-01-29 16:41:39.453060906 +0000 UTC m=+77.003132192" Jan 29 16:41:39.541972 kubelet[1861]: E0129 16:41:39.541894 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:39.590069 systemd-networkd[1389]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 16:41:40.543164 kubelet[1861]: E0129 16:41:40.543048 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:41.544228 kubelet[1861]: E0129 16:41:41.544097 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:42.545033 kubelet[1861]: E0129 16:41:42.544927 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:43.480010 kubelet[1861]: E0129 16:41:43.479903 1861 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:43.545913 kubelet[1861]: E0129 16:41:43.545833 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:44.546493 kubelet[1861]: E0129 16:41:44.546422 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:45.547486 kubelet[1861]: E0129 16:41:45.547382 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:46.548947 kubelet[1861]: E0129 16:41:46.548816 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:47.549350 kubelet[1861]: E0129 16:41:47.549243 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:48.549504 kubelet[1861]: E0129 16:41:48.549431 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:49.550612 kubelet[1861]: E0129 16:41:49.550485 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:50.551858 kubelet[1861]: E0129 16:41:50.551804 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:51.552446 kubelet[1861]: E0129 16:41:51.552387 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:52.553157 kubelet[1861]: E0129 16:41:52.553103 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:53.553311 kubelet[1861]: E0129 16:41:53.553241 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:54.553877 kubelet[1861]: E0129 16:41:54.553809 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:55.554892 kubelet[1861]: E0129 16:41:55.554756 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:56.555990 kubelet[1861]: E0129 16:41:56.555898 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:57.557202 kubelet[1861]: E0129 16:41:57.557124 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:58.557901 kubelet[1861]: E0129 16:41:58.557827 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:41:59.558756 kubelet[1861]: E0129 16:41:59.558673 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:42:00.559386 kubelet[1861]: E0129 16:42:00.559323 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:42:01.560190 kubelet[1861]: E0129 16:42:01.560116 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:42:02.560966 kubelet[1861]: E0129 16:42:02.560896 1861 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"