Dec 13 13:38:41.013320 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:38:41.013345 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:38:41.013358 kernel: BIOS-provided physical RAM map:
Dec 13 13:38:41.013367 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 13:38:41.013376 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 13:38:41.013384 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 13:38:41.013394 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 13:38:41.013403 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 13:38:41.013411 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:38:41.013420 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 13:38:41.013431 kernel: NX (Execute Disable) protection: active
Dec 13 13:38:41.013439 kernel: APIC: Static calls initialized
Dec 13 13:38:41.013448 kernel: SMBIOS 2.8 present.
Dec 13 13:38:41.013457 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 13:38:41.013467 kernel: Hypervisor detected: KVM
Dec 13 13:38:41.013478 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:38:41.013487 kernel: kvm-clock: using sched offset of 4736671944 cycles
Dec 13 13:38:41.013496 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:38:41.013506 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 13:38:41.013515 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:38:41.013525 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:38:41.013534 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 13:38:41.013555 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 13:38:41.013564 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:38:41.013576 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:38:41.013585 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 13:38:41.013594 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:38:41.013604 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:38:41.013613 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:38:41.013622 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 13:38:41.013631 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:38:41.013640 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:38:41.013649 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 13:38:41.013661 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 13:38:41.013670 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 13:38:41.013679 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 13:38:41.013688 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 13:38:41.013697 kernel: No NUMA configuration found
Dec 13 13:38:41.013706 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 13:38:41.013715 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 13:38:41.013728 kernel: Zone ranges:
Dec 13 13:38:41.013740 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:38:41.013750 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 13:38:41.013759 kernel: Normal empty
Dec 13 13:38:41.013769 kernel: Movable zone start for each node
Dec 13 13:38:41.013778 kernel: Early memory node ranges
Dec 13 13:38:41.013788 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 13:38:41.013799 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 13:38:41.013809 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 13:38:41.013818 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:38:41.013828 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 13:38:41.013837 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 13:38:41.013847 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:38:41.013856 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:38:41.013866 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:38:41.013962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:38:41.013975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:38:41.013988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:38:41.013997 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:38:41.014007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:38:41.014017 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:38:41.014026 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 13:38:41.014036 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:38:41.014045 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 13:38:41.014055 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:38:41.014065 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:38:41.014077 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 13:38:41.014087 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 13:38:41.014096 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 13:38:41.014106 kernel: pcpu-alloc: [0] 0 1
Dec 13 13:38:41.014115 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 13:38:41.014127 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:38:41.014137 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:38:41.014146 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:38:41.014158 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 13:38:41.014168 kernel: Fallback order for Node 0: 0
Dec 13 13:38:41.014178 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 13:38:41.014187 kernel: Policy zone: DMA32
Dec 13 13:38:41.014197 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:38:41.014207 kernel: Memory: 1969164K/2096620K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 127196K reserved, 0K cma-reserved)
Dec 13 13:38:41.014216 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 13:38:41.014226 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:38:41.014237 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:38:41.014247 kernel: Dynamic Preempt: voluntary
Dec 13 13:38:41.014257 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:38:41.014267 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:38:41.014277 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 13:38:41.014287 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:38:41.014296 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:38:41.014306 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:38:41.014315 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:38:41.014325 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 13:38:41.014337 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 13:38:41.014346 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:38:41.014356 kernel: Console: colour VGA+ 80x25
Dec 13 13:38:41.014365 kernel: printk: console [tty0] enabled
Dec 13 13:38:41.014375 kernel: printk: console [ttyS0] enabled
Dec 13 13:38:41.014386 kernel: ACPI: Core revision 20230628
Dec 13 13:38:41.014396 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:38:41.014404 kernel: x2apic enabled
Dec 13 13:38:41.014413 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:38:41.014424 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 13:38:41.014433 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 13:38:41.014442 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 13:38:41.014451 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 13:38:41.014460 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 13:38:41.014469 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:38:41.014478 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:38:41.014487 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:38:41.014496 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:38:41.014507 kernel: Speculative Store Bypass: Vulnerable
Dec 13 13:38:41.014516 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 13:38:41.014524 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:38:41.014533 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:38:41.014542 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:38:41.014551 kernel: landlock: Up and running.
Dec 13 13:38:41.014560 kernel: SELinux: Initializing.
Dec 13 13:38:41.014569 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 13:38:41.014586 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 13:38:41.014596 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 13:38:41.014605 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:38:41.014615 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:38:41.014626 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:38:41.014635 kernel: Performance Events: AMD PMU driver.
Dec 13 13:38:41.014645 kernel: ... version: 0
Dec 13 13:38:41.014654 kernel: ... bit width: 48
Dec 13 13:38:41.014665 kernel: ... generic registers: 4
Dec 13 13:38:41.014675 kernel: ... value mask: 0000ffffffffffff
Dec 13 13:38:41.014684 kernel: ... max period: 00007fffffffffff
Dec 13 13:38:41.014693 kernel: ... fixed-purpose events: 0
Dec 13 13:38:41.014703 kernel: ... event mask: 000000000000000f
Dec 13 13:38:41.014712 kernel: signal: max sigframe size: 1440
Dec 13 13:38:41.014721 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:38:41.014730 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:38:41.014740 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:38:41.014749 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:38:41.014760 kernel: .... node #0, CPUs: #1
Dec 13 13:38:41.014769 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 13:38:41.014778 kernel: smpboot: Max logical packages: 2
Dec 13 13:38:41.014788 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 13:38:41.014797 kernel: devtmpfs: initialized
Dec 13 13:38:41.014806 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:38:41.014815 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:38:41.014825 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 13:38:41.014834 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:38:41.014846 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:38:41.014855 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:38:41.014864 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:38:41.014874 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:38:41.016429 kernel: audit: type=2000 audit(1734097120.278:1): state=initialized audit_enabled=0 res=1
Dec 13 13:38:41.016439 kernel: cpuidle: using governor menu
Dec 13 13:38:41.016448 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:38:41.016457 kernel: dca service started, version 1.12.1
Dec 13 13:38:41.016467 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:38:41.016480 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:38:41.016490 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:38:41.016499 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:38:41.016508 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:38:41.016517 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:38:41.016527 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:38:41.016536 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:38:41.016545 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:38:41.016555 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:38:41.016566 kernel: ACPI: Interpreter enabled
Dec 13 13:38:41.016575 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 13:38:41.016584 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:38:41.016594 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:38:41.016603 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:38:41.016613 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 13:38:41.016622 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:38:41.016767 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:38:41.016871 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 13:38:41.016987 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 13:38:41.017001 kernel: acpiphp: Slot [3] registered
Dec 13 13:38:41.017011 kernel: acpiphp: Slot [4] registered
Dec 13 13:38:41.017020 kernel: acpiphp: Slot [5] registered
Dec 13 13:38:41.017030 kernel: acpiphp: Slot [6] registered
Dec 13 13:38:41.017039 kernel: acpiphp: Slot [7] registered
Dec 13 13:38:41.017048 kernel: acpiphp: Slot [8] registered
Dec 13 13:38:41.017061 kernel: acpiphp: Slot [9] registered
Dec 13 13:38:41.017070 kernel: acpiphp: Slot [10] registered
Dec 13 13:38:41.017079 kernel: acpiphp: Slot [11] registered
Dec 13 13:38:41.017089 kernel: acpiphp: Slot [12] registered
Dec 13 13:38:41.017098 kernel: acpiphp: Slot [13] registered
Dec 13 13:38:41.017107 kernel: acpiphp: Slot [14] registered
Dec 13 13:38:41.017116 kernel: acpiphp: Slot [15] registered
Dec 13 13:38:41.017125 kernel: acpiphp: Slot [16] registered
Dec 13 13:38:41.017135 kernel: acpiphp: Slot [17] registered
Dec 13 13:38:41.017144 kernel: acpiphp: Slot [18] registered
Dec 13 13:38:41.017155 kernel: acpiphp: Slot [19] registered
Dec 13 13:38:41.017164 kernel: acpiphp: Slot [20] registered
Dec 13 13:38:41.017173 kernel: acpiphp: Slot [21] registered
Dec 13 13:38:41.017182 kernel: acpiphp: Slot [22] registered
Dec 13 13:38:41.017192 kernel: acpiphp: Slot [23] registered
Dec 13 13:38:41.017201 kernel: acpiphp: Slot [24] registered
Dec 13 13:38:41.017210 kernel: acpiphp: Slot [25] registered
Dec 13 13:38:41.017219 kernel: acpiphp: Slot [26] registered
Dec 13 13:38:41.017228 kernel: acpiphp: Slot [27] registered
Dec 13 13:38:41.017239 kernel: acpiphp: Slot [28] registered
Dec 13 13:38:41.017249 kernel: acpiphp: Slot [29] registered
Dec 13 13:38:41.017258 kernel: acpiphp: Slot [30] registered
Dec 13 13:38:41.017267 kernel: acpiphp: Slot [31] registered
Dec 13 13:38:41.017276 kernel: PCI host bridge to bus 0000:00
Dec 13 13:38:41.017367 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:38:41.017449 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:38:41.017531 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:38:41.017632 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 13:38:41.017713 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 13:38:41.017791 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:38:41.017924 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 13:38:41.018027 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 13:38:41.018125 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 13:38:41.018222 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 13:38:41.018312 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 13:38:41.018402 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 13:38:41.018492 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 13:38:41.018581 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 13:38:41.018677 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 13:38:41.018768 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 13:38:41.018862 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 13:38:41.019374 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 13:38:41.019475 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 13:38:41.019566 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 13:38:41.019656 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 13:38:41.019746 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 13:38:41.019837 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:38:41.019969 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:38:41.020060 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 13:38:41.020148 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 13:38:41.020238 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 13:38:41.020330 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 13:38:41.020428 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:38:41.020519 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 13:38:41.020615 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 13:38:41.020705 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 13:38:41.020802 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 13:38:41.020919 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 13:38:41.021014 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 13:38:41.021114 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:38:41.021207 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 13:38:41.021303 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 13:38:41.021318 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:38:41.021327 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:38:41.021337 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:38:41.021346 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:38:41.021355 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 13:38:41.021365 kernel: iommu: Default domain type: Translated
Dec 13 13:38:41.021375 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:38:41.021388 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:38:41.021397 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:38:41.021407 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 13:38:41.021416 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 13:38:41.021505 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 13:38:41.021608 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 13:38:41.021700 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:38:41.021714 kernel: vgaarb: loaded
Dec 13 13:38:41.021724 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:38:41.021737 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:38:41.021746 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:38:41.021756 kernel: pnp: PnP ACPI init
Dec 13 13:38:41.021847 kernel: pnp 00:03: [dma 2]
Dec 13 13:38:41.021862 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 13:38:41.021872 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:38:41.021936 kernel: NET: Registered PF_INET protocol family
Dec 13 13:38:41.021946 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:38:41.021959 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 13:38:41.021969 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:38:41.021978 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 13:38:41.021988 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 13:38:41.021997 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 13:38:41.022006 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 13:38:41.022016 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 13:38:41.022025 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:38:41.022034 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:38:41.022122 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:38:41.022201 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:38:41.022278 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:38:41.022355 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 13:38:41.022431 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 13:38:41.022520 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 13:38:41.022610 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 13:38:41.022624 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:38:41.022637 kernel: Initialise system trusted keyrings
Dec 13 13:38:41.022646 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 13:38:41.022656 kernel: Key type asymmetric registered
Dec 13 13:38:41.022666 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:38:41.022675 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:38:41.022684 kernel: io scheduler mq-deadline registered
Dec 13 13:38:41.022693 kernel: io scheduler kyber registered
Dec 13 13:38:41.022703 kernel: io scheduler bfq registered
Dec 13 13:38:41.022712 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:38:41.022724 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 13:38:41.022734 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 13:38:41.022743 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 13:38:41.022753 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 13:38:41.022762 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:38:41.022772 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:38:41.022781 kernel: random: crng init done
Dec 13 13:38:41.022790 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 13:38:41.022799 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:38:41.022811 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:38:41.022925 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 13:38:41.023013 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 13:38:41.023027 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 13:38:41.023104 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:38:40 UTC (1734097120)
Dec 13 13:38:41.023183 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 13:38:41.023197 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 13:38:41.023206 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:38:41.023220 kernel: Segment Routing with IPv6
Dec 13 13:38:41.023229 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:38:41.023238 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:38:41.023248 kernel: Key type dns_resolver registered
Dec 13 13:38:41.023257 kernel: IPI shorthand broadcast: enabled
Dec 13 13:38:41.023266 kernel: sched_clock: Marking stable (892007971, 125878603)->(1021635106, -3748532)
Dec 13 13:38:41.023276 kernel: registered taskstats version 1
Dec 13 13:38:41.023285 kernel: Loading compiled-in X.509 certificates
Dec 13 13:38:41.023295 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:38:41.023306 kernel: Key type .fscrypt registered
Dec 13 13:38:41.023315 kernel: Key type fscrypt-provisioning registered
Dec 13 13:38:41.023325 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:38:41.023334 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:38:41.023343 kernel: ima: No architecture policies found
Dec 13 13:38:41.023353 kernel: clk: Disabling unused clocks
Dec 13 13:38:41.023362 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:38:41.023372 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:38:41.023383 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:38:41.023393 kernel: Run /init as init process
Dec 13 13:38:41.023402 kernel: with arguments:
Dec 13 13:38:41.023411 kernel: /init
Dec 13 13:38:41.023420 kernel: with environment:
Dec 13 13:38:41.023429 kernel: HOME=/
Dec 13 13:38:41.023438 kernel: TERM=linux
Dec 13 13:38:41.023447 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:38:41.023459 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:38:41.023474 systemd[1]: Detected virtualization kvm.
Dec 13 13:38:41.023484 systemd[1]: Detected architecture x86-64.
Dec 13 13:38:41.023495 systemd[1]: Running in initrd.
Dec 13 13:38:41.023505 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:38:41.023515 systemd[1]: Hostname set to .
Dec 13 13:38:41.023525 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:38:41.023536 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:38:41.023548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:38:41.023558 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:38:41.023569 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:38:41.023580 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:38:41.023611 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:38:41.023624 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:38:41.023636 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:38:41.023649 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:38:41.023659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:38:41.023670 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:38:41.023680 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:38:41.023699 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:38:41.023711 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:38:41.023724 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:38:41.023734 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:38:41.023745 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:38:41.023755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:38:41.023766 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:38:41.023776 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:38:41.023787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:38:41.023798 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:38:41.023810 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:38:41.023821 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:38:41.023833 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:38:41.023844 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:38:41.023856 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:38:41.023867 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:38:41.024285 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:38:41.024300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:38:41.024311 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:38:41.024345 systemd-journald[186]: Collecting audit messages is disabled.
Dec 13 13:38:41.024371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:38:41.024383 systemd-journald[186]: Journal started
Dec 13 13:38:41.024409 systemd-journald[186]: Runtime Journal (/run/log/journal/b9013d79fa004980afe88ea0f47cdf8f) is 4.9M, max 39.3M, 34.4M free.
Dec 13 13:38:41.028918 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:38:41.028171 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:38:41.032952 systemd-modules-load[187]: Inserted module 'overlay'
Dec 13 13:38:41.036263 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:38:41.041100 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:38:41.045089 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:38:41.054525 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:38:41.104036 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:38:41.104065 kernel: Bridge firewalling registered
Dec 13 13:38:41.077332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:38:41.082983 systemd-modules-load[187]: Inserted module 'br_netfilter'
Dec 13 13:38:41.106154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:38:41.107000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:38:41.108177 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:38:41.115094 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:38:41.117341 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:38:41.129917 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:38:41.139033 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:38:41.140932 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:38:41.143028 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:38:41.162995 dracut-cmdline[222]: dracut-dracut-053
Dec 13 13:38:41.168616 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:38:41.183761 systemd-resolved[217]: Positive Trust Anchors:
Dec 13 13:38:41.183777 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:38:41.183821 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:38:41.191206 systemd-resolved[217]: Defaulting to hostname 'linux'.
Dec 13 13:38:41.192490 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:38:41.193483 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:38:41.264955 kernel: SCSI subsystem initialized
Dec 13 13:38:41.276937 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:38:41.290949 kernel: iscsi: registered transport (tcp)
Dec 13 13:38:41.316009 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:38:41.316103 kernel: QLogic iSCSI HBA Driver
Dec 13 13:38:41.376998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:38:41.389127 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:38:41.439147 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:38:41.439267 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:38:41.441149 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:38:41.507001 kernel: raid6: sse2x4 gen() 5039 MB/s
Dec 13 13:38:41.523935 kernel: raid6: sse2x2 gen() 6685 MB/s
Dec 13 13:38:41.541179 kernel: raid6: sse2x1 gen() 9121 MB/s
Dec 13 13:38:41.541261 kernel: raid6: using algorithm sse2x1 gen() 9121 MB/s
Dec 13 13:38:41.560005 kernel: raid6: .... xor() 6760 MB/s, rmw enabled
Dec 13 13:38:41.560123 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 13:38:41.585387 kernel: xor: measuring software checksum speed
Dec 13 13:38:41.585498 kernel: prefetch64-sse : 17183 MB/sec
Dec 13 13:38:41.585970 kernel: generic_sse : 14762 MB/sec
Dec 13 13:38:41.587559 kernel: xor: using function: prefetch64-sse (17183 MB/sec)
Dec 13 13:38:41.798981 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:38:41.818505 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:38:41.827066 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:38:41.875360 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Dec 13 13:38:41.884628 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:38:41.896203 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:38:41.926340 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Dec 13 13:38:41.964594 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:38:41.972302 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:38:42.020512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:38:42.032286 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:38:42.083219 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:38:42.084451 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:38:42.086248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:38:42.088825 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:38:42.093708 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:38:42.123297 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:38:42.134959 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Dec 13 13:38:42.162583 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Dec 13 13:38:42.162710 kernel: libata version 3.00 loaded.
Dec 13 13:38:42.162727 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:38:42.162741 kernel: GPT:17805311 != 41943039
Dec 13 13:38:42.162754 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:38:42.162767 kernel: GPT:17805311 != 41943039
Dec 13 13:38:42.162787 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:38:42.162800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:38:42.162813 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 13:38:42.172098 kernel: scsi host0: ata_piix
Dec 13 13:38:42.172263 kernel: scsi host1: ata_piix
Dec 13 13:38:42.172390 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Dec 13 13:38:42.172408 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Dec 13 13:38:42.139576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:38:42.140033 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:38:42.141362 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:38:42.143907 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:38:42.144055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:38:42.144551 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:38:42.153235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:38:42.229998 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (454)
Dec 13 13:38:42.238411 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (462)
Dec 13 13:38:42.252789 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:38:42.260389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:38:42.267714 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:38:42.273830 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:38:42.278993 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:38:42.279612 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:38:42.287050 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:38:42.290022 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:38:42.303263 disk-uuid[500]: Primary Header is updated.
Dec 13 13:38:42.303263 disk-uuid[500]: Secondary Entries is updated.
Dec 13 13:38:42.303263 disk-uuid[500]: Secondary Header is updated.
Dec 13 13:38:42.310946 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:38:42.311255 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:38:43.329806 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:38:43.329934 disk-uuid[506]: The operation has completed successfully.
Dec 13 13:38:43.415725 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:38:43.416041 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:38:43.446128 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:38:43.465086 sh[522]: Success
Dec 13 13:38:43.488946 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Dec 13 13:38:43.577749 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:38:43.580107 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:38:43.581496 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:38:43.604870 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52
Dec 13 13:38:43.604968 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:38:43.607773 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:38:43.607800 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:38:43.610173 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:38:43.625589 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:38:43.626769 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:38:43.633049 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:38:43.635528 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:38:43.645251 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:38:43.645291 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:38:43.647355 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:38:43.654060 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:38:43.668665 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:38:43.672374 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:38:43.682441 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:38:43.690103 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:38:43.766139 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:38:43.775098 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:38:43.797265 systemd-networkd[706]: lo: Link UP
Dec 13 13:38:43.797955 systemd-networkd[706]: lo: Gained carrier
Dec 13 13:38:43.799783 systemd-networkd[706]: Enumeration completed
Dec 13 13:38:43.800132 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:38:43.800700 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:38:43.800703 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:38:43.801544 systemd-networkd[706]: eth0: Link UP
Dec 13 13:38:43.801549 systemd-networkd[706]: eth0: Gained carrier
Dec 13 13:38:43.801556 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:38:43.801809 systemd[1]: Reached target network.target - Network.
Dec 13 13:38:43.820929 systemd-networkd[706]: eth0: DHCPv4 address 172.24.4.147/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 13:38:43.851483 ignition[617]: Ignition 2.20.0
Dec 13 13:38:43.851494 ignition[617]: Stage: fetch-offline
Dec 13 13:38:43.851532 ignition[617]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:38:43.851542 ignition[617]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:38:43.855202 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:38:43.852411 ignition[617]: parsed url from cmdline: ""
Dec 13 13:38:43.852416 ignition[617]: no config URL provided
Dec 13 13:38:43.852423 ignition[617]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:38:43.852434 ignition[617]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:38:43.852439 ignition[617]: failed to fetch config: resource requires networking
Dec 13 13:38:43.852723 ignition[617]: Ignition finished successfully
Dec 13 13:38:43.861151 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 13:38:43.879172 ignition[716]: Ignition 2.20.0
Dec 13 13:38:43.879190 ignition[716]: Stage: fetch
Dec 13 13:38:43.879449 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:38:43.879464 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:38:43.879584 ignition[716]: parsed url from cmdline: ""
Dec 13 13:38:43.879592 ignition[716]: no config URL provided
Dec 13 13:38:43.879602 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:38:43.879616 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:38:43.879743 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 13:38:43.879863 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 13:38:43.879945 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 13:38:44.332727 ignition[716]: GET result: OK
Dec 13 13:38:44.332831 ignition[716]: parsing config with SHA512: b89f0c7f17453f8c074af4fb8fdfa5a655e220332b96aecea8e98a67696df245c55d2520b34d02ffc572ec274f27626fb9a1c5a955123ee5290e33ee010227b6
Dec 13 13:38:44.342408 unknown[716]: fetched base config from "system"
Dec 13 13:38:44.342450 unknown[716]: fetched base config from "system"
Dec 13 13:38:44.343514 ignition[716]: fetch: fetch complete
Dec 13 13:38:44.342470 unknown[716]: fetched user config from "openstack"
Dec 13 13:38:44.343534 ignition[716]: fetch: fetch passed
Dec 13 13:38:44.347617 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 13:38:44.343654 ignition[716]: Ignition finished successfully
Dec 13 13:38:44.357252 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:38:44.404957 ignition[722]: Ignition 2.20.0
Dec 13 13:38:44.405008 ignition[722]: Stage: kargs
Dec 13 13:38:44.405588 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:38:44.405623 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:38:44.411655 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:38:44.408455 ignition[722]: kargs: kargs passed
Dec 13 13:38:44.408586 ignition[722]: Ignition finished successfully
Dec 13 13:38:44.422251 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:38:44.462728 ignition[728]: Ignition 2.20.0
Dec 13 13:38:44.462745 ignition[728]: Stage: disks
Dec 13 13:38:44.463003 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:38:44.463017 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:38:44.465287 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:38:44.464102 ignition[728]: disks: disks passed
Dec 13 13:38:44.464152 ignition[728]: Ignition finished successfully
Dec 13 13:38:44.467263 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:38:44.468358 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:38:44.469520 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:38:44.470817 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:38:44.472154 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:38:44.485047 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:38:44.768975 systemd-fsck[737]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 13:38:44.783237 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:38:44.794860 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:38:44.944973 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:38:44.946222 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:38:44.947701 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:38:44.953980 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:38:44.958065 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:38:44.959476 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:38:44.963309 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 13:38:44.979902 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (745)
Dec 13 13:38:44.979937 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:38:44.979958 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:38:44.979980 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:38:44.980001 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:38:44.970739 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:38:44.970790 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:38:44.979662 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:38:44.982162 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:38:44.989075 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:38:45.120844 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:38:45.131270 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:38:45.134964 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:38:45.140179 systemd-networkd[706]: eth0: Gained IPv6LL
Dec 13 13:38:45.142291 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:38:45.235496 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:38:45.242001 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:38:45.244742 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:38:45.252527 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:38:45.254953 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:38:45.279491 ignition[862]: INFO : Ignition 2.20.0
Dec 13 13:38:45.279491 ignition[862]: INFO : Stage: mount
Dec 13 13:38:45.281228 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:38:45.281228 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:38:45.281228 ignition[862]: INFO : mount: mount passed
Dec 13 13:38:45.281228 ignition[862]: INFO : Ignition finished successfully
Dec 13 13:38:45.281211 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:38:45.282034 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:38:52.222450 coreos-metadata[747]: Dec 13 13:38:52.222 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 13:38:52.270989 coreos-metadata[747]: Dec 13 13:38:52.270 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 13:38:52.290495 coreos-metadata[747]: Dec 13 13:38:52.290 INFO Fetch successful
Dec 13 13:38:52.292006 coreos-metadata[747]: Dec 13 13:38:52.291 INFO wrote hostname ci-4186-0-0-2-5444734329.novalocal to /sysroot/etc/hostname
Dec 13 13:38:52.294724 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 13:38:52.294987 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 13:38:52.309183 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:38:52.337564 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:38:52.355027 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (878)
Dec 13 13:38:52.367170 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:38:52.367246 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:38:52.367278 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:38:52.374914 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:38:52.381041 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:38:52.433076 ignition[896]: INFO : Ignition 2.20.0
Dec 13 13:38:52.433076 ignition[896]: INFO : Stage: files
Dec 13 13:38:52.435955 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:38:52.435955 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:38:52.435955 ignition[896]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:38:52.442471 ignition[896]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:38:52.442471 ignition[896]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:38:52.447196 ignition[896]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:38:52.447196 ignition[896]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:38:52.447196 ignition[896]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:38:52.445075 unknown[896]: wrote ssh authorized keys file for user: core
Dec 13 13:38:52.456028 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:38:52.456028 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:38:52.456028 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:38:52.456028 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:38:52.456028 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:38:52.456028 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:38:52.456028 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:38:52.456028 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 13:38:52.847025 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 13:38:54.415706 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:38:54.418182 ignition[896]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:38:54.418182 ignition[896]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:38:54.418182 ignition[896]: INFO : files: files passed
Dec 13 13:38:54.418182 ignition[896]: INFO : Ignition finished successfully
Dec 13 13:38:54.418189 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:38:54.426073 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:38:54.434773 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:38:54.439359 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:38:54.440101 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:38:54.445346 initrd-setup-root-after-ignition[924]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:38:54.445346 initrd-setup-root-after-ignition[924]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:38:54.447569 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:38:54.449200 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:38:54.451423 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:38:54.458143 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:38:54.494263 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:38:54.494491 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:38:54.496417 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:38:54.497845 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:38:54.499558 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:38:54.505110 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:38:54.519078 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:38:54.527141 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:38:54.545726 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:38:54.547734 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:38:54.549727 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:38:54.551437 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:38:54.551702 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:38:54.553750 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:38:54.555676 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:38:54.557433 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:38:54.559489 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:38:54.561473 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:38:54.563449 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:38:54.565344 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:38:54.572654 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:38:54.573302 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:38:54.574997 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:38:54.576366 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:38:54.576516 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:38:54.578439 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:38:54.579376 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:38:54.581083 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:38:54.581183 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:38:54.582997 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:38:54.583146 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:38:54.585432 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:38:54.585604 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:38:54.586443 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:38:54.586598 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:38:54.595967 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:38:54.596917 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:38:54.597116 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:38:54.602028 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:38:54.604252 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:38:54.604460 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:38:54.605203 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:38:54.605363 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:38:54.614456 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:38:54.614560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:38:54.624865 ignition[948]: INFO : Ignition 2.20.0
Dec 13 13:38:54.624865 ignition[948]: INFO : Stage: umount
Dec 13 13:38:54.626905 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:38:54.626905 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:38:54.626905 ignition[948]: INFO : umount: umount passed
Dec 13 13:38:54.626905 ignition[948]: INFO : Ignition finished successfully
Dec 13 13:38:54.627786 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:38:54.627932 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:38:54.629081 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:38:54.629127 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:38:54.629820 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:38:54.629861 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:38:54.630359 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 13:38:54.630398 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 13:38:54.630901 systemd[1]: Stopped target network.target - Network.
Dec 13 13:38:54.631358 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:38:54.631399 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:38:54.632283 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:38:54.633101 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:38:54.636910 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:38:54.637723 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:38:54.638619 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:38:54.639642 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:38:54.639673 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:38:54.640644 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:38:54.640676 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:38:54.641497 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:38:54.641549 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:38:54.642605 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:38:54.642645 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:38:54.643932 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:38:54.644748 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:38:54.656283 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:38:54.656400 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:38:54.656920 systemd-networkd[706]: eth0: DHCPv6 lease lost
Dec 13 13:38:54.659227 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:38:54.659333 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:38:54.660714 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:38:54.661019 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:38:54.670193 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:38:54.670670 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:38:54.670720 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:38:54.671303 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:38:54.671346 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:38:54.672189 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:38:54.672228 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:38:54.673211 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:38:54.673250 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:38:54.674486 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:38:54.682274 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:38:54.682928 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:38:54.683695 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:38:54.683770 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:38:54.685453 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:38:54.685509 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:38:54.686408 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:38:54.686440 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:38:54.689662 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:38:54.689710 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:38:54.691480 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:38:54.691520 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:38:54.692581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:38:54.692620 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:38:54.705999 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:38:54.707894 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:38:54.707950 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:38:54.712023 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:38:54.712064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:38:54.713406 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:38:54.713497 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:38:54.791392 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:38:55.444867 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:38:55.445149 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:38:55.447387 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:38:55.449188 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:38:55.449314 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:38:55.458245 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:38:55.519117 systemd[1]: Switching root.
Dec 13 13:38:55.573604 systemd-journald[186]: Journal stopped
Dec 13 13:38:59.189425 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:38:59.189479 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:38:59.189496 kernel: SELinux: policy capability open_perms=1
Dec 13 13:38:59.189526 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:38:59.189539 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:38:59.189551 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:38:59.189566 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:38:59.189581 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:38:59.189593 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:38:59.189605 kernel: audit: type=1403 audit(1734097136.508:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:38:59.189618 systemd[1]: Successfully loaded SELinux policy in 75.722ms.
Dec 13 13:38:59.189635 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.779ms.
Dec 13 13:38:59.189649 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:38:59.189663 systemd[1]: Detected virtualization kvm.
Dec 13 13:38:59.189677 systemd[1]: Detected architecture x86-64.
Dec 13 13:38:59.189693 systemd[1]: Detected first boot.
Dec 13 13:38:59.189707 systemd[1]: Hostname set to .
Dec 13 13:38:59.189721 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:38:59.189736 zram_generator::config[991]: No configuration found.
Dec 13 13:38:59.189755 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:38:59.189771 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:38:59.189784 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:38:59.189798 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:38:59.189812 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:38:59.189826 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:38:59.189839 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:38:59.189853 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:38:59.189867 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:38:59.197972 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:38:59.198004 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:38:59.198019 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:38:59.198033 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:38:59.198048 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:38:59.198063 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:38:59.198077 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:38:59.198092 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:38:59.198106 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:38:59.198122 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:38:59.198136 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:38:59.198149 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:38:59.198164 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:38:59.198177 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:38:59.198191 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:38:59.198207 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:38:59.198221 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:38:59.198235 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:38:59.198249 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:38:59.198264 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:38:59.198278 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:38:59.198293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:38:59.198306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:38:59.198320 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:38:59.198334 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:38:59.198351 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:38:59.198365 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:38:59.198378 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:38:59.198392 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:38:59.198407 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:38:59.198421 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:38:59.198435 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:38:59.198450 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:38:59.198466 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:38:59.198480 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:38:59.198494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:38:59.198508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:38:59.198523 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:38:59.198537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:38:59.198551 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:38:59.198565 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:38:59.198579 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:38:59.198595 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:38:59.198610 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:38:59.198624 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:38:59.198638 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:38:59.198652 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:38:59.198666 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:38:59.198680 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:38:59.198694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:38:59.198711 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:38:59.198726 kernel: loop: module loaded
Dec 13 13:38:59.198742 kernel: fuse: init (API version 7.39)
Dec 13 13:38:59.198757 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:38:59.198773 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:38:59.198788 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:38:59.198803 systemd[1]: Stopped verity-setup.service.
Dec 13 13:38:59.198818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:38:59.198833 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:38:59.198850 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:38:59.198865 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:38:59.203981 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:38:59.204014 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:38:59.204028 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:38:59.204048 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:38:59.204061 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:38:59.204074 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:38:59.204086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:38:59.204099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:38:59.204113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:38:59.204126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:38:59.204167 systemd-journald[1070]: Collecting audit messages is disabled.
Dec 13 13:38:59.204194 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:38:59.204207 kernel: ACPI: bus type drm_connector registered
Dec 13 13:38:59.204220 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:38:59.204234 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:38:59.204247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:38:59.204262 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:38:59.204275 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:38:59.204287 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:38:59.204300 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:38:59.204313 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:38:59.204327 systemd-journald[1070]: Journal started
Dec 13 13:38:59.204353 systemd-journald[1070]: Runtime Journal (/run/log/journal/b9013d79fa004980afe88ea0f47cdf8f) is 4.9M, max 39.3M, 34.4M free.
Dec 13 13:38:58.776080 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:38:58.835941 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 13:38:58.836611 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:38:59.212261 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:38:59.223926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:38:59.227896 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:38:59.234904 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:38:59.241589 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:38:59.241963 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:38:59.242892 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:38:59.244056 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:38:59.244619 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:38:59.257750 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:38:59.257809 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:38:59.261842 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:38:59.268062 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:38:59.269775 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:38:59.270429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:38:59.276180 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:38:59.281066 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:38:59.281699 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:38:59.283337 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:38:59.288085 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:38:59.291064 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:38:59.293565 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:38:59.295308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:38:59.296974 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:38:59.309265 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:38:59.317071 systemd-journald[1070]: Time spent on flushing to /var/log/journal/b9013d79fa004980afe88ea0f47cdf8f is 31.253ms for 922 entries.
Dec 13 13:38:59.317071 systemd-journald[1070]: System Journal (/var/log/journal/b9013d79fa004980afe88ea0f47cdf8f) is 8.0M, max 584.8M, 576.8M free.
Dec 13 13:38:59.425004 systemd-journald[1070]: Received client request to flush runtime journal.
Dec 13 13:38:59.425107 kernel: loop0: detected capacity change from 0 to 138184
Dec 13 13:38:59.313009 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:38:59.315520 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:38:59.325113 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:38:59.333294 udevadm[1129]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 13:38:59.428926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:38:59.441837 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:38:59.444364 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:38:59.474000 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:38:59.486432 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:38:59.496063 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:38:59.502918 kernel: loop1: detected capacity change from 0 to 141000
Dec 13 13:38:59.544577 systemd-tmpfiles[1144]: ACLs are not supported, ignoring.
Dec 13 13:38:59.544600 systemd-tmpfiles[1144]: ACLs are not supported, ignoring.
Dec 13 13:38:59.551518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:38:59.625941 kernel: loop2: detected capacity change from 0 to 8
Dec 13 13:38:59.667314 kernel: loop3: detected capacity change from 0 to 211296
Dec 13 13:38:59.782034 kernel: loop4: detected capacity change from 0 to 138184
Dec 13 13:38:59.893027 kernel: loop5: detected capacity change from 0 to 141000
Dec 13 13:38:59.961188 kernel: loop6: detected capacity change from 0 to 8
Dec 13 13:38:59.961363 kernel: loop7: detected capacity change from 0 to 211296
Dec 13 13:38:59.997949 (sd-merge)[1150]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Dec 13 13:38:59.998402 (sd-merge)[1150]: Merged extensions into '/usr'.
Dec 13 13:39:00.007039 systemd[1]: Reloading requested from client PID 1128 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:39:00.007134 systemd[1]: Reloading...
Dec 13 13:39:00.097955 zram_generator::config[1175]: No configuration found.
Dec 13 13:39:00.276485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:39:00.335563 systemd[1]: Reloading finished in 328 ms.
Dec 13 13:39:00.364790 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:39:00.365674 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:39:00.374024 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:39:00.376035 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:39:00.386019 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:39:00.407333 systemd[1]: Reloading requested from client PID 1232 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:39:00.407354 systemd[1]: Reloading...
Dec 13 13:39:00.432127 systemd-udevd[1234]: Using default interface naming scheme 'v255'.
Dec 13 13:39:00.437432 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:39:00.439415 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:39:00.440325 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:39:00.440617 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Dec 13 13:39:00.440676 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Dec 13 13:39:00.443987 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:39:00.446034 systemd-tmpfiles[1233]: Skipping /boot
Dec 13 13:39:00.456814 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:39:00.456974 systemd-tmpfiles[1233]: Skipping /boot
Dec 13 13:39:00.497967 zram_generator::config[1262]: No configuration found.
Dec 13 13:39:00.511124 ldconfig[1124]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:39:00.558939 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1272)
Dec 13 13:39:00.613905 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1281)
Dec 13 13:39:00.617549 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1281)
Dec 13 13:39:00.651910 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 13:39:00.659921 kernel: ACPI: button: Power Button [PWRF]
Dec 13 13:39:00.699907 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 13:39:00.742923 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 13:39:00.743134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:39:00.754918 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:39:00.761970 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 13 13:39:00.762020 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 13 13:39:00.767322 kernel: Console: switching to colour dummy device 80x25
Dec 13 13:39:00.768404 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 13:39:00.768442 kernel: [drm] features: -context_init
Dec 13 13:39:00.769910 kernel: [drm] number of scanouts: 1
Dec 13 13:39:00.769946 kernel: [drm] number of cap sets: 0
Dec 13 13:39:00.772911 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Dec 13 13:39:00.780899 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 13:39:00.780982 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 13:39:00.787104 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 13:39:00.822981 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:39:00.823182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:39:00.825555 systemd[1]: Reloading finished in 417 ms.
Dec 13 13:39:00.843510 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:39:00.845748 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:39:00.849282 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:39:00.888047 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:39:00.894023 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:39:00.895970 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:39:00.901354 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:39:00.903758 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:39:00.908426 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:39:00.913030 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:39:00.917030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:39:00.925755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:39:00.926033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:39:00.931703 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:39:00.944276 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:39:00.946161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:39:00.947733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:39:00.948179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:39:00.951260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:39:00.951445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:39:00.957820 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:39:00.958069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:39:00.962211 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:39:00.963122 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:39:00.966919 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:39:00.968001 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:39:00.980525 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:39:00.980770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:39:00.988773 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:39:00.989697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:39:00.990376 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:39:00.991233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:39:00.991935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:39:00.992938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:39:00.994117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:39:00.996778 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:39:00.996949 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:39:00.997759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:39:01.001658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:39:01.011196 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:39:01.017175 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:39:01.022405 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:39:01.022907 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:39:01.032208 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:39:01.044512 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:39:01.045762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:39:01.045837 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:39:01.059058 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:39:01.069070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:39:01.072037 augenrules[1393]: No rules
Dec 13 13:39:01.072752 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:39:01.074864 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:39:01.077957 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:39:01.079355 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:39:01.080024 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:39:01.096156 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:39:01.102652 lvm[1387]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:39:01.100148 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:39:01.100304 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:39:01.119061 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:39:01.135995 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:39:01.137714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:39:01.148034 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:39:01.175426 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:39:01.203576 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:39:01.219764 systemd-networkd[1350]: lo: Link UP Dec 13 13:39:01.219771 systemd-networkd[1350]: lo: Gained carrier Dec 13 13:39:01.221229 systemd-networkd[1350]: Enumeration completed Dec 13 13:39:01.221393 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:39:01.226115 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:39:01.226120 systemd-networkd[1350]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:39:01.227975 systemd-networkd[1350]: eth0: Link UP Dec 13 13:39:01.228045 systemd-networkd[1350]: eth0: Gained carrier Dec 13 13:39:01.228105 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:39:01.232064 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Dec 13 13:39:01.233521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:39:01.237922 systemd-networkd[1350]: eth0: DHCPv4 address 172.24.4.147/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 13:39:01.238094 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:39:01.238669 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. Dec 13 13:39:01.239434 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:39:01.254366 systemd-resolved[1351]: Positive Trust Anchors: Dec 13 13:39:01.254385 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:39:01.254431 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:39:01.259576 systemd-resolved[1351]: Using system hostname 'ci-4186-0-0-2-5444734329.novalocal'. Dec 13 13:39:01.819111 systemd-timesyncd[1390]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Dec 13 13:39:01.819156 systemd-timesyncd[1390]: Initial clock synchronization to Fri 2024-12-13 13:39:01.819023 UTC. Dec 13 13:39:01.820085 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:39:01.820761 systemd[1]: Reached target network.target - Network. Dec 13 13:39:01.820930 systemd-resolved[1351]: Clock change detected. Flushing caches. 
Dec 13 13:39:01.821190 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:39:01.821600 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:39:01.823739 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:39:01.824718 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:39:01.825838 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:39:01.827152 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:39:01.828149 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:39:01.829381 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:39:01.829494 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:39:01.830447 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:39:01.833730 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:39:01.838097 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:39:01.843808 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:39:01.845655 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:39:01.846189 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:39:01.846604 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:39:01.848023 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:39:01.848059 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:39:01.857783 systemd[1]: Starting containerd.service - containerd container runtime... 
Dec 13 13:39:01.862382 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 13:39:01.870812 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:39:01.873599 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:39:01.878797 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:39:01.880360 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:39:01.886809 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:39:01.889824 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:39:01.895049 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:39:01.905828 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:39:01.910246 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:39:01.918444 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:39:01.920835 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:39:01.931678 jq[1426]: false Dec 13 13:39:01.932140 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:39:01.941243 dbus-daemon[1425]: [system] SELinux support is enabled Dec 13 13:39:01.943891 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:39:01.950358 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:39:01.950834 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 13 13:39:01.951070 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:39:01.951315 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:39:01.958204 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:39:01.964218 extend-filesystems[1429]: Found loop4 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found loop5 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found loop6 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found loop7 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found vda Dec 13 13:39:01.964218 extend-filesystems[1429]: Found vda1 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found vda2 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found vda3 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found usr Dec 13 13:39:01.964218 extend-filesystems[1429]: Found vda4 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found vda6 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found vda7 Dec 13 13:39:01.964218 extend-filesystems[1429]: Found vda9 Dec 13 13:39:01.964218 extend-filesystems[1429]: Checking size of /dev/vda9 Dec 13 13:39:02.032346 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 13:39:01.958248 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 13 13:39:02.032529 extend-filesystems[1429]: Resized partition /dev/vda9 Dec 13 13:39:02.048368 update_engine[1437]: I20241213 13:39:01.978659 1437 main.cc:92] Flatcar Update Engine starting Dec 13 13:39:02.048368 update_engine[1437]: I20241213 13:39:01.988930 1437 update_check_scheduler.cc:74] Next update check in 3m42s Dec 13 13:39:02.051526 jq[1438]: true Dec 13 13:39:01.961161 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:39:02.052411 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:39:01.961182 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:39:02.058771 jq[1452]: true Dec 13 13:39:01.984354 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:39:01.984546 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:39:01.995725 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:39:02.009885 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:39:02.022804 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:39:02.045277 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:39:02.061087 systemd-logind[1433]: New seat seat0. Dec 13 13:39:02.064741 systemd-logind[1433]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 13:39:02.064758 systemd-logind[1433]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 13:39:02.064964 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 13 13:39:02.093659 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1275) Dec 13 13:39:02.197486 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:39:02.245646 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 13:39:02.328972 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:39:02.328972 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 13:39:02.328972 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 13:39:02.342910 extend-filesystems[1429]: Resized filesystem in /dev/vda9 Dec 13 13:39:02.340952 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:39:02.353603 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:39:02.348009 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:39:02.361355 bash[1479]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:39:02.364167 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:39:02.381742 systemd[1]: Starting sshkeys.service... Dec 13 13:39:02.386603 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:39:02.408832 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:39:02.418998 systemd[1]: Started sshd@0-172.24.4.147:22-172.24.4.1:48098.service - OpenSSH per-connection server daemon (172.24.4.1:48098). Dec 13 13:39:02.427192 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 13:39:02.441063 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 13:39:02.442252 systemd[1]: issuegen.service: Deactivated successfully. 
Dec 13 13:39:02.443464 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:39:02.458127 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:39:02.478604 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:39:02.485641 containerd[1455]: time="2024-12-13T13:39:02.484174541Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:39:02.488142 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:39:02.499061 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:39:02.501093 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:39:02.519956 containerd[1455]: time="2024-12-13T13:39:02.519912558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:39:02.521517 containerd[1455]: time="2024-12-13T13:39:02.521469918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:39:02.521517 containerd[1455]: time="2024-12-13T13:39:02.521512759Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:39:02.521609 containerd[1455]: time="2024-12-13T13:39:02.521536273Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:39:02.521750 containerd[1455]: time="2024-12-13T13:39:02.521721200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:39:02.521779 containerd[1455]: time="2024-12-13T13:39:02.521749032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Dec 13 13:39:02.521838 containerd[1455]: time="2024-12-13T13:39:02.521814284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:39:02.521872 containerd[1455]: time="2024-12-13T13:39:02.521838680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:39:02.522041 containerd[1455]: time="2024-12-13T13:39:02.522009480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:39:02.522041 containerd[1455]: time="2024-12-13T13:39:02.522036190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:39:02.522093 containerd[1455]: time="2024-12-13T13:39:02.522053593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:39:02.522093 containerd[1455]: time="2024-12-13T13:39:02.522065966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:39:02.522198 containerd[1455]: time="2024-12-13T13:39:02.522176874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:39:02.522432 containerd[1455]: time="2024-12-13T13:39:02.522403489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:39:02.522537 containerd[1455]: time="2024-12-13T13:39:02.522514097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:39:02.522564 containerd[1455]: time="2024-12-13T13:39:02.522537621Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:39:02.522683 containerd[1455]: time="2024-12-13T13:39:02.522658187Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:39:02.522757 containerd[1455]: time="2024-12-13T13:39:02.522737876Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:39:02.530840 containerd[1455]: time="2024-12-13T13:39:02.530807221Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:39:02.530894 containerd[1455]: time="2024-12-13T13:39:02.530855491Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:39:02.530894 containerd[1455]: time="2024-12-13T13:39:02.530873856Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:39:02.530952 containerd[1455]: time="2024-12-13T13:39:02.530891028Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:39:02.530952 containerd[1455]: time="2024-12-13T13:39:02.530907238Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:39:02.531082 containerd[1455]: time="2024-12-13T13:39:02.531050306Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:39:02.531334 containerd[1455]: time="2024-12-13T13:39:02.531302409Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 13:39:02.531431 containerd[1455]: time="2024-12-13T13:39:02.531409941Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:39:02.531458 containerd[1455]: time="2024-12-13T13:39:02.531435669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:39:02.531487 containerd[1455]: time="2024-12-13T13:39:02.531459113Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:39:02.531487 containerd[1455]: time="2024-12-13T13:39:02.531477177Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:39:02.531533 containerd[1455]: time="2024-12-13T13:39:02.531491835Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:39:02.531533 containerd[1455]: time="2024-12-13T13:39:02.531505941Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:39:02.531533 containerd[1455]: time="2024-12-13T13:39:02.531520438Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:39:02.531604 containerd[1455]: time="2024-12-13T13:39:02.531535797Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:39:02.531604 containerd[1455]: time="2024-12-13T13:39:02.531550414Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:39:02.531604 containerd[1455]: time="2024-12-13T13:39:02.531570672Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 13:39:02.531604 containerd[1455]: time="2024-12-13T13:39:02.531585470Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:39:02.531718 containerd[1455]: time="2024-12-13T13:39:02.531607301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531718 containerd[1455]: time="2024-12-13T13:39:02.531646374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531718 containerd[1455]: time="2024-12-13T13:39:02.531664418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531718 containerd[1455]: time="2024-12-13T13:39:02.531679226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531718 containerd[1455]: time="2024-12-13T13:39:02.531693773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531718 containerd[1455]: time="2024-12-13T13:39:02.531708090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531851 containerd[1455]: time="2024-12-13T13:39:02.531722297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531851 containerd[1455]: time="2024-12-13T13:39:02.531737836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531851 containerd[1455]: time="2024-12-13T13:39:02.531755499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531851 containerd[1455]: time="2024-12-13T13:39:02.531772120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Dec 13 13:39:02.531851 containerd[1455]: time="2024-12-13T13:39:02.531785736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531851 containerd[1455]: time="2024-12-13T13:39:02.531799211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531851 containerd[1455]: time="2024-12-13T13:39:02.531813748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.531851 containerd[1455]: time="2024-12-13T13:39:02.531830890Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.531852431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.531868401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.531881265Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.531932311Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.531951807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.531963539Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.531977706Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.531988606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.532002743Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.532017440Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:39:02.532033 containerd[1455]: time="2024-12-13T13:39:02.532030956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:39:02.532455 containerd[1455]: time="2024-12-13T13:39:02.532365453Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:39:02.532455 containerd[1455]: time="2024-12-13T13:39:02.532431567Z" level=info msg="Connect containerd service" Dec 13 13:39:02.532643 containerd[1455]: time="2024-12-13T13:39:02.532465881Z" level=info msg="using legacy CRI server" Dec 13 13:39:02.532643 containerd[1455]: time="2024-12-13T13:39:02.532474858Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:39:02.532643 containerd[1455]: 
time="2024-12-13T13:39:02.532582650Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:39:02.533527 containerd[1455]: time="2024-12-13T13:39:02.533493368Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:39:02.534012 containerd[1455]: time="2024-12-13T13:39:02.533671953Z" level=info msg="Start subscribing containerd event" Dec 13 13:39:02.534012 containerd[1455]: time="2024-12-13T13:39:02.533728710Z" level=info msg="Start recovering state" Dec 13 13:39:02.534012 containerd[1455]: time="2024-12-13T13:39:02.533810243Z" level=info msg="Start event monitor" Dec 13 13:39:02.534012 containerd[1455]: time="2024-12-13T13:39:02.533826553Z" level=info msg="Start snapshots syncer" Dec 13 13:39:02.534012 containerd[1455]: time="2024-12-13T13:39:02.533839177Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:39:02.534012 containerd[1455]: time="2024-12-13T13:39:02.533848154Z" level=info msg="Start streaming server" Dec 13 13:39:02.534012 containerd[1455]: time="2024-12-13T13:39:02.533933314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:39:02.534224 containerd[1455]: time="2024-12-13T13:39:02.534014356Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:39:02.534154 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:39:02.534367 containerd[1455]: time="2024-12-13T13:39:02.534338393Z" level=info msg="containerd successfully booted in 0.050950s" Dec 13 13:39:02.914346 systemd-networkd[1350]: eth0: Gained IPv6LL Dec 13 13:39:02.919830 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Dec 13 13:39:02.925959 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:39:02.951590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:39:02.964707 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:39:03.017751 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:39:03.517436 sshd[1501]: Accepted publickey for core from 172.24.4.1 port 48098 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:39:03.523043 sshd-session[1501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:39:03.551266 systemd-logind[1433]: New session 1 of user core. Dec 13 13:39:03.556755 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:39:03.567789 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:39:03.608100 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:39:03.623986 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:39:03.644584 (systemd)[1530]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:39:03.765124 systemd[1530]: Queued start job for default target default.target. Dec 13 13:39:03.772532 systemd[1530]: Created slice app.slice - User Application Slice. Dec 13 13:39:03.772555 systemd[1530]: Reached target paths.target - Paths. Dec 13 13:39:03.772570 systemd[1530]: Reached target timers.target - Timers. Dec 13 13:39:03.774394 systemd[1530]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:39:03.805569 systemd[1530]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:39:03.805850 systemd[1530]: Reached target sockets.target - Sockets. Dec 13 13:39:03.805872 systemd[1530]: Reached target basic.target - Basic System. 
Dec 13 13:39:03.805913 systemd[1530]: Reached target default.target - Main User Target.
Dec 13 13:39:03.805940 systemd[1530]: Startup finished in 154ms.
Dec 13 13:39:03.806225 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:39:03.823781 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:39:04.294519 systemd[1]: Started sshd@1-172.24.4.147:22-172.24.4.1:48106.service - OpenSSH per-connection server daemon (172.24.4.1:48106).
Dec 13 13:39:05.369978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:39:05.372381 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:39:05.818230 sshd[1541]: Accepted publickey for core from 172.24.4.1 port 48106 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs
Dec 13 13:39:05.820887 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:39:05.834523 systemd-logind[1433]: New session 2 of user core.
Dec 13 13:39:05.839086 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:39:06.462079 sshd[1551]: Connection closed by 172.24.4.1 port 48106
Dec 13 13:39:06.463102 sshd-session[1541]: pam_unix(sshd:session): session closed for user core
Dec 13 13:39:06.476109 systemd[1]: sshd@1-172.24.4.147:22-172.24.4.1:48106.service: Deactivated successfully.
Dec 13 13:39:06.480613 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 13:39:06.483947 systemd-logind[1433]: Session 2 logged out. Waiting for processes to exit.
Dec 13 13:39:06.492982 systemd[1]: Started sshd@2-172.24.4.147:22-172.24.4.1:52970.service - OpenSSH per-connection server daemon (172.24.4.1:52970).
Dec 13 13:39:06.500082 systemd-logind[1433]: Removed session 2.
Dec 13 13:39:07.517461 agetty[1511]: failed to open credentials directory
Dec 13 13:39:07.533760 agetty[1512]: failed to open credentials directory
Dec 13 13:39:07.536649 login[1511]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 13:39:07.544682 systemd-logind[1433]: New session 3 of user core.
Dec 13 13:39:07.555052 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:39:07.564452 login[1512]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 13:39:07.576936 systemd-logind[1433]: New session 4 of user core.
Dec 13 13:39:07.583958 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:39:07.626126 kubelet[1549]: E1213 13:39:07.626045 1549 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:39:07.628590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:39:07.628763 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:39:07.629040 systemd[1]: kubelet.service: Consumed 2.291s CPU time.
Dec 13 13:39:07.812067 sshd[1560]: Accepted publickey for core from 172.24.4.1 port 52970 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs
Dec 13 13:39:07.815183 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:39:07.828128 systemd-logind[1433]: New session 5 of user core.
Dec 13 13:39:07.838052 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:39:08.346083 sshd[1591]: Connection closed by 172.24.4.1 port 52970
Dec 13 13:39:08.347184 sshd-session[1560]: pam_unix(sshd:session): session closed for user core
Dec 13 13:39:08.354897 systemd[1]: sshd@2-172.24.4.147:22-172.24.4.1:52970.service: Deactivated successfully.
Dec 13 13:39:08.359719 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 13:39:08.363031 systemd-logind[1433]: Session 5 logged out. Waiting for processes to exit.
Dec 13 13:39:08.365906 systemd-logind[1433]: Removed session 5.
Dec 13 13:39:08.959756 coreos-metadata[1424]: Dec 13 13:39:08.959 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 13:39:09.033073 coreos-metadata[1424]: Dec 13 13:39:09.032 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 13 13:39:09.243244 coreos-metadata[1424]: Dec 13 13:39:09.243 INFO Fetch successful
Dec 13 13:39:09.243244 coreos-metadata[1424]: Dec 13 13:39:09.243 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 13:39:09.261363 coreos-metadata[1424]: Dec 13 13:39:09.261 INFO Fetch successful
Dec 13 13:39:09.261363 coreos-metadata[1424]: Dec 13 13:39:09.261 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 13:39:09.279035 coreos-metadata[1424]: Dec 13 13:39:09.278 INFO Fetch successful
Dec 13 13:39:09.279035 coreos-metadata[1424]: Dec 13 13:39:09.279 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 13:39:09.297524 coreos-metadata[1424]: Dec 13 13:39:09.297 INFO Fetch successful
Dec 13 13:39:09.297524 coreos-metadata[1424]: Dec 13 13:39:09.297 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 13:39:09.312998 coreos-metadata[1424]: Dec 13 13:39:09.312 INFO Fetch successful
Dec 13 13:39:09.312998 coreos-metadata[1424]: Dec 13 13:39:09.312 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 13:39:09.327292 coreos-metadata[1424]: Dec 13 13:39:09.327 INFO Fetch successful
Dec 13 13:39:09.369988 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 13:39:09.371371 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:39:09.546128 coreos-metadata[1504]: Dec 13 13:39:09.545 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 13:39:09.588915 coreos-metadata[1504]: Dec 13 13:39:09.588 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 13:39:09.605376 coreos-metadata[1504]: Dec 13 13:39:09.605 INFO Fetch successful
Dec 13 13:39:09.605376 coreos-metadata[1504]: Dec 13 13:39:09.605 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 13:39:09.621381 coreos-metadata[1504]: Dec 13 13:39:09.621 INFO Fetch successful
Dec 13 13:39:09.906682 unknown[1504]: wrote ssh authorized keys file for user: core
Dec 13 13:39:10.732865 update-ssh-keys[1604]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:39:10.734117 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 13:39:10.737770 systemd[1]: Finished sshkeys.service.
Dec 13 13:39:10.742572 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:39:10.743302 systemd[1]: Startup finished in 1.112s (kernel) + 15.689s (initrd) + 13.750s (userspace) = 30.552s.
Dec 13 13:39:17.879793 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:39:17.889027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:39:18.320070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:39:18.340687 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:39:18.369277 systemd[1]: Started sshd@3-172.24.4.147:22-172.24.4.1:33340.service - OpenSSH per-connection server daemon (172.24.4.1:33340).
Dec 13 13:39:18.648249 kubelet[1616]: E1213 13:39:18.648125 1616 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:39:18.651990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:39:18.652140 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:39:19.872023 sshd[1622]: Accepted publickey for core from 172.24.4.1 port 33340 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs
Dec 13 13:39:19.874791 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:39:19.886268 systemd-logind[1433]: New session 6 of user core.
Dec 13 13:39:19.893925 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:39:20.609873 sshd[1627]: Connection closed by 172.24.4.1 port 33340
Dec 13 13:39:20.610955 sshd-session[1622]: pam_unix(sshd:session): session closed for user core
Dec 13 13:39:20.624160 systemd[1]: sshd@3-172.24.4.147:22-172.24.4.1:33340.service: Deactivated successfully.
Dec 13 13:39:20.627141 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:39:20.630908 systemd-logind[1433]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:39:20.636253 systemd[1]: Started sshd@4-172.24.4.147:22-172.24.4.1:33342.service - OpenSSH per-connection server daemon (172.24.4.1:33342).
Dec 13 13:39:20.639576 systemd-logind[1433]: Removed session 6.
Dec 13 13:39:22.078406 sshd[1632]: Accepted publickey for core from 172.24.4.1 port 33342 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs
Dec 13 13:39:22.081023 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:39:22.090058 systemd-logind[1433]: New session 7 of user core.
Dec 13 13:39:22.102976 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:39:22.734685 sshd[1634]: Connection closed by 172.24.4.1 port 33342
Dec 13 13:39:22.735131 sshd-session[1632]: pam_unix(sshd:session): session closed for user core
Dec 13 13:39:22.747813 systemd[1]: sshd@4-172.24.4.147:22-172.24.4.1:33342.service: Deactivated successfully.
Dec 13 13:39:22.751956 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:39:22.755839 systemd-logind[1433]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:39:22.770289 systemd[1]: Started sshd@5-172.24.4.147:22-172.24.4.1:33352.service - OpenSSH per-connection server daemon (172.24.4.1:33352).
Dec 13 13:39:22.774850 systemd-logind[1433]: Removed session 7.
Dec 13 13:39:24.114329 sshd[1639]: Accepted publickey for core from 172.24.4.1 port 33352 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs
Dec 13 13:39:24.115970 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:39:24.121562 systemd-logind[1433]: New session 8 of user core.
Dec 13 13:39:24.128839 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 13:39:24.722045 sshd[1641]: Connection closed by 172.24.4.1 port 33352
Dec 13 13:39:24.722810 sshd-session[1639]: pam_unix(sshd:session): session closed for user core
Dec 13 13:39:24.732879 systemd[1]: sshd@5-172.24.4.147:22-172.24.4.1:33352.service: Deactivated successfully.
Dec 13 13:39:24.735842 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 13:39:24.738937 systemd-logind[1433]: Session 8 logged out. Waiting for processes to exit.
Dec 13 13:39:24.746216 systemd[1]: Started sshd@6-172.24.4.147:22-172.24.4.1:37484.service - OpenSSH per-connection server daemon (172.24.4.1:37484).
Dec 13 13:39:24.749391 systemd-logind[1433]: Removed session 8.
Dec 13 13:39:25.975972 sshd[1646]: Accepted publickey for core from 172.24.4.1 port 37484 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs
Dec 13 13:39:25.979142 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:39:25.989565 systemd-logind[1433]: New session 9 of user core.
Dec 13 13:39:26.005024 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 13:39:26.445396 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 13:39:26.446107 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:39:26.466462 sudo[1649]: pam_unix(sudo:session): session closed for user root
Dec 13 13:39:26.671723 sshd[1648]: Connection closed by 172.24.4.1 port 37484
Dec 13 13:39:26.672501 sshd-session[1646]: pam_unix(sshd:session): session closed for user core
Dec 13 13:39:26.681901 systemd[1]: sshd@6-172.24.4.147:22-172.24.4.1:37484.service: Deactivated successfully.
Dec 13 13:39:26.685135 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 13:39:26.687124 systemd-logind[1433]: Session 9 logged out. Waiting for processes to exit.
Dec 13 13:39:26.695233 systemd[1]: Started sshd@7-172.24.4.147:22-172.24.4.1:37492.service - OpenSSH per-connection server daemon (172.24.4.1:37492).
Dec 13 13:39:26.699352 systemd-logind[1433]: Removed session 9.
Dec 13 13:39:27.884439 sshd[1654]: Accepted publickey for core from 172.24.4.1 port 37492 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs
Dec 13 13:39:27.887365 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:39:27.896165 systemd-logind[1433]: New session 10 of user core.
Dec 13 13:39:27.904897 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:39:28.298186 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 13:39:28.299391 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:39:28.305982 sudo[1658]: pam_unix(sudo:session): session closed for user root
Dec 13 13:39:28.316397 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 13:39:28.317084 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:39:28.340302 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:39:28.414096 augenrules[1680]: No rules
Dec 13 13:39:28.415327 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:39:28.415894 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:39:28.418248 sudo[1657]: pam_unix(sudo:session): session closed for user root
Dec 13 13:39:28.626086 sshd[1656]: Connection closed by 172.24.4.1 port 37492
Dec 13 13:39:28.626009 sshd-session[1654]: pam_unix(sshd:session): session closed for user core
Dec 13 13:39:28.639229 systemd[1]: sshd@7-172.24.4.147:22-172.24.4.1:37492.service: Deactivated successfully.
Dec 13 13:39:28.642726 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:39:28.647025 systemd-logind[1433]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:39:28.653219 systemd[1]: Started sshd@8-172.24.4.147:22-172.24.4.1:37496.service - OpenSSH per-connection server daemon (172.24.4.1:37496).
Dec 13 13:39:28.656592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 13:39:28.668805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:39:28.670566 systemd-logind[1433]: Removed session 10.
Dec 13 13:39:29.006053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:39:29.022450 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:39:29.652762 kubelet[1698]: E1213 13:39:29.652691 1698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:39:29.658540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:39:29.659245 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:39:29.662511 sshd[1688]: Accepted publickey for core from 172.24.4.1 port 37496 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs
Dec 13 13:39:29.665106 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:39:29.675893 systemd-logind[1433]: New session 11 of user core.
Dec 13 13:39:29.682898 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:39:30.128511 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:39:30.129192 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:39:31.707046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:39:31.721170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:39:31.781447 systemd[1]: Reloading requested from client PID 1746 ('systemctl') (unit session-11.scope)...
Dec 13 13:39:31.781489 systemd[1]: Reloading...
Dec 13 13:39:31.884666 zram_generator::config[1782]: No configuration found.
Dec 13 13:39:32.546390 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:39:32.630366 systemd[1]: Reloading finished in 848 ms.
Dec 13 13:39:32.679384 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 13:39:32.679468 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 13:39:32.679712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:39:32.686883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:39:32.790752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:39:32.796558 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:39:32.848356 kubelet[1849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:39:32.848356 kubelet[1849]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:39:32.848356 kubelet[1849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:39:32.848742 kubelet[1849]: I1213 13:39:32.848416 1849 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 13:39:33.414734 kubelet[1849]: I1213 13:39:33.414505 1849 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 13:39:33.414734 kubelet[1849]: I1213 13:39:33.414559 1849 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 13:39:33.415076 kubelet[1849]: I1213 13:39:33.415030 1849 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 13:39:34.476101 kubelet[1849]: I1213 13:39:34.475157 1849 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:39:34.515850 kubelet[1849]: I1213 13:39:34.515804 1849 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:39:34.516594 kubelet[1849]: I1213 13:39:34.516560 1849 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:39:34.517249 kubelet[1849]: I1213 13:39:34.517209 1849 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 13:39:34.517572 kubelet[1849]: I1213 13:39:34.517546 1849 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:39:34.517768 kubelet[1849]: I1213 13:39:34.517742 1849 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 13:39:34.518961 kubelet[1849]: I1213 13:39:34.518084 1849 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:39:34.518961 kubelet[1849]: I1213 13:39:34.518280 1849 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 13:39:34.518961 kubelet[1849]: I1213 13:39:34.518315 1849 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:39:34.518961 kubelet[1849]: I1213 13:39:34.518375 1849 kubelet.go:312] "Adding apiserver pod source"
Dec 13 13:39:34.518961 kubelet[1849]: I1213 13:39:34.518407 1849 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:39:34.519391 kubelet[1849]: E1213 13:39:34.519106 1849 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:34.519391 kubelet[1849]: E1213 13:39:34.519206 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:34.522684 kubelet[1849]: I1213 13:39:34.522489 1849 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:39:34.530715 kubelet[1849]: W1213 13:39:34.530430 1849 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.24.4.147" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 13:39:34.530715 kubelet[1849]: E1213 13:39:34.530491 1849 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.147" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 13:39:34.531152 kubelet[1849]: W1213 13:39:34.531116 1849 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 13:39:34.532019 kubelet[1849]: E1213 13:39:34.531313 1849 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 13:39:34.532019 kubelet[1849]: I1213 13:39:34.531587 1849 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:39:34.532019 kubelet[1849]: W1213 13:39:34.531754 1849 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 13:39:34.533771 kubelet[1849]: I1213 13:39:34.533726 1849 server.go:1256] "Started kubelet"
Dec 13 13:39:34.534928 kubelet[1849]: I1213 13:39:34.534818 1849 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:39:34.538402 kubelet[1849]: I1213 13:39:34.538068 1849 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 13:39:34.543584 kubelet[1849]: I1213 13:39:34.543509 1849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:39:34.548676 kubelet[1849]: I1213 13:39:34.547484 1849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:39:34.548676 kubelet[1849]: I1213 13:39:34.548050 1849 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:39:34.556774 kubelet[1849]: I1213 13:39:34.556738 1849 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:39:34.557437 kubelet[1849]: I1213 13:39:34.557406 1849 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 13:39:34.559480 kubelet[1849]: I1213 13:39:34.559450 1849 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 13:39:34.562344 kubelet[1849]: I1213 13:39:34.561384 1849 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:39:34.563859 kubelet[1849]: E1213 13:39:34.562579 1849 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.147.1810c0314e1676e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.147,UID:172.24.4.147,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.147,},FirstTimestamp:2024-12-13 13:39:34.53366653 +0000 UTC m=+1.732895942,LastTimestamp:2024-12-13 13:39:34.53366653 +0000 UTC m=+1.732895942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.147,}"
Dec 13 13:39:34.564018 kubelet[1849]: W1213 13:39:34.562946 1849 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 13:39:34.564127 kubelet[1849]: E1213 13:39:34.564113 1849 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 13:39:34.564233 kubelet[1849]: E1213 13:39:34.563061 1849 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.147\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 13:39:34.564421 kubelet[1849]: I1213 13:39:34.563910 1849 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:39:34.565169 kubelet[1849]: E1213 13:39:34.564270 1849 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 13:39:34.567510 kubelet[1849]: I1213 13:39:34.567492 1849 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:39:34.578672 kubelet[1849]: E1213 13:39:34.577224 1849 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.147.1810c0314fe8f34e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.147,UID:172.24.4.147,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.24.4.147,},FirstTimestamp:2024-12-13 13:39:34.564238158 +0000 UTC m=+1.763467530,LastTimestamp:2024-12-13 13:39:34.564238158 +0000 UTC m=+1.763467530,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.147,}"
Dec 13 13:39:34.593941 kubelet[1849]: E1213 13:39:34.593907 1849 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.147.1810c0315189ad98 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.147,UID:172.24.4.147,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.24.4.147 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.24.4.147,},FirstTimestamp:2024-12-13 13:39:34.591548824 +0000 UTC m=+1.790778196,LastTimestamp:2024-12-13 13:39:34.591548824 +0000 UTC m=+1.790778196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.147,}"
Dec 13 13:39:34.597230 kubelet[1849]: I1213 13:39:34.596955 1849 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 13:39:34.597230 kubelet[1849]: I1213 13:39:34.596974 1849 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 13:39:34.597230 kubelet[1849]: I1213 13:39:34.596994 1849 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:39:34.602221 kubelet[1849]: I1213 13:39:34.601900 1849 policy_none.go:49] "None policy: Start"
Dec 13 13:39:34.603519 kubelet[1849]: I1213 13:39:34.603498 1849 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 13:39:34.603519 kubelet[1849]: I1213 13:39:34.603522 1849 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 13:39:34.611096 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 13:39:34.625831 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 13:39:34.631765 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 13:39:34.638494 kubelet[1849]: I1213 13:39:34.638470 1849 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 13:39:34.641384 kubelet[1849]: I1213 13:39:34.640247 1849 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 13:39:34.646221 kubelet[1849]: E1213 13:39:34.645382 1849 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.147\" not found"
Dec 13 13:39:34.651424 kubelet[1849]: I1213 13:39:34.651377 1849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 13:39:34.655411 kubelet[1849]: I1213 13:39:34.655374 1849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 13:39:34.655863 kubelet[1849]: I1213 13:39:34.655822 1849 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 13:39:34.656023 kubelet[1849]: I1213 13:39:34.655953 1849 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 13:39:34.656223 kubelet[1849]: E1213 13:39:34.656205 1849 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 13:39:34.658963 kubelet[1849]: I1213 13:39:34.658839 1849 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.147"
Dec 13 13:39:34.687629 kubelet[1849]: I1213 13:39:34.687580 1849 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.147"
Dec 13 13:39:34.706333 kubelet[1849]: E1213 13:39:34.706298 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found"
Dec 13 13:39:34.807854 kubelet[1849]: E1213 13:39:34.807591 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found"
Dec 13 13:39:34.907952 kubelet[1849]: E1213 13:39:34.907880 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found"
Dec 13 13:39:35.008302 kubelet[1849]: E1213 13:39:35.008200 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found"
Dec 13 13:39:35.108921 kubelet[1849]: E1213 13:39:35.108777 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found"
Dec 13 13:39:35.209778 kubelet[1849]: E1213 13:39:35.209686 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found"
Dec 13 13:39:35.310692 kubelet[1849]: E1213 13:39:35.310596 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found"
Dec 13 13:39:35.335951 sudo[1707]: pam_unix(sudo:session): session closed for user root
Dec 13 13:39:35.411522 kubelet[1849]: E1213 13:39:35.411407 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found"
Dec 13 13:39:35.418942 kubelet[1849]: I1213 13:39:35.418873 1849 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 13:39:35.419296 kubelet[1849]: W1213 13:39:35.419165 1849 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 13:39:35.419296 kubelet[1849]: W1213 13:39:35.419221 1849 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 13:39:35.512061 kubelet[1849]: E1213 13:39:35.511960 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found"
Dec 13 13:39:35.519445 kubelet[1849]: E1213 13:39:35.519325 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:35.534612 sshd[1706]: Connection closed by 172.24.4.1 port 37496
Dec 13 13:39:35.535191 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
Dec 13 13:39:35.544865 systemd[1]: sshd@8-172.24.4.147:22-172.24.4.1:37496.service: Deactivated successfully.
Dec 13 13:39:35.551016 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:39:35.551505 systemd[1]: session-11.scope: Consumed 1.006s CPU time, 111.1M memory peak, 0B memory swap peak.
Dec 13 13:39:35.554255 systemd-logind[1433]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:39:35.558053 systemd-logind[1433]: Removed session 11. Dec 13 13:39:35.612949 kubelet[1849]: E1213 13:39:35.612885 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found" Dec 13 13:39:35.713967 kubelet[1849]: E1213 13:39:35.713806 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found" Dec 13 13:39:35.814830 kubelet[1849]: E1213 13:39:35.814754 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found" Dec 13 13:39:35.915895 kubelet[1849]: E1213 13:39:35.915831 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found" Dec 13 13:39:36.016285 kubelet[1849]: E1213 13:39:36.016086 1849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.147\" not found" Dec 13 13:39:36.118173 kubelet[1849]: I1213 13:39:36.118031 1849 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 13:39:36.119069 containerd[1455]: time="2024-12-13T13:39:36.118734779Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 13:39:36.120292 kubelet[1849]: I1213 13:39:36.119350 1849 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 13:39:36.520232 kubelet[1849]: E1213 13:39:36.520162 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:39:36.520232 kubelet[1849]: I1213 13:39:36.520232 1849 apiserver.go:52] "Watching apiserver" Dec 13 13:39:36.532572 kubelet[1849]: I1213 13:39:36.532479 1849 topology_manager.go:215] "Topology Admit Handler" podUID="8cd289d4-e94b-4467-9501-6954d295e4cd" podNamespace="calico-system" podName="calico-node-znh9x" Dec 13 13:39:36.533675 kubelet[1849]: I1213 13:39:36.532981 1849 topology_manager.go:215] "Topology Admit Handler" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87" podNamespace="calico-system" podName="csi-node-driver-r8fks" Dec 13 13:39:36.533675 kubelet[1849]: I1213 13:39:36.533103 1849 topology_manager.go:215] "Topology Admit Handler" podUID="73412d34-4505-40c6-9009-a70a7fa8cecb" podNamespace="kube-system" podName="kube-proxy-b847z" Dec 13 13:39:36.533675 kubelet[1849]: E1213 13:39:36.533494 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87" Dec 13 13:39:36.548969 systemd[1]: Created slice kubepods-besteffort-pod8cd289d4_e94b_4467_9501_6954d295e4cd.slice - libcontainer container kubepods-besteffort-pod8cd289d4_e94b_4467_9501_6954d295e4cd.slice. 
Dec 13 13:39:36.562245 kubelet[1849]: I1213 13:39:36.561317 1849 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:39:36.570600 kubelet[1849]: I1213 13:39:36.570536 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8cd289d4-e94b-4467-9501-6954d295e4cd-cni-bin-dir\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.570799 kubelet[1849]: I1213 13:39:36.570673 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8cd289d4-e94b-4467-9501-6954d295e4cd-cni-log-dir\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.570799 kubelet[1849]: I1213 13:39:36.570738 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5d03366d-217c-4fd9-b86c-ea2d2a91fb87-varrun\") pod \"csi-node-driver-r8fks\" (UID: \"5d03366d-217c-4fd9-b86c-ea2d2a91fb87\") " pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:36.570799 kubelet[1849]: I1213 13:39:36.570793 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5d03366d-217c-4fd9-b86c-ea2d2a91fb87-socket-dir\") pod \"csi-node-driver-r8fks\" (UID: \"5d03366d-217c-4fd9-b86c-ea2d2a91fb87\") " pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:36.570989 kubelet[1849]: I1213 13:39:36.570847 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73412d34-4505-40c6-9009-a70a7fa8cecb-kube-proxy\") pod \"kube-proxy-b847z\" (UID: 
\"73412d34-4505-40c6-9009-a70a7fa8cecb\") " pod="kube-system/kube-proxy-b847z" Dec 13 13:39:36.570989 kubelet[1849]: I1213 13:39:36.570902 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cd289d4-e94b-4467-9501-6954d295e4cd-lib-modules\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.570989 kubelet[1849]: I1213 13:39:36.570954 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8cd289d4-e94b-4467-9501-6954d295e4cd-policysync\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.571172 kubelet[1849]: I1213 13:39:36.571008 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cd289d4-e94b-4467-9501-6954d295e4cd-tigera-ca-bundle\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.571172 kubelet[1849]: I1213 13:39:36.571103 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8cd289d4-e94b-4467-9501-6954d295e4cd-cni-net-dir\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.571292 kubelet[1849]: I1213 13:39:36.571177 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhwbt\" (UniqueName: \"kubernetes.io/projected/8cd289d4-e94b-4467-9501-6954d295e4cd-kube-api-access-hhwbt\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " 
pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.571292 kubelet[1849]: I1213 13:39:36.571235 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d03366d-217c-4fd9-b86c-ea2d2a91fb87-registration-dir\") pod \"csi-node-driver-r8fks\" (UID: \"5d03366d-217c-4fd9-b86c-ea2d2a91fb87\") " pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:36.571420 kubelet[1849]: I1213 13:39:36.571295 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73412d34-4505-40c6-9009-a70a7fa8cecb-lib-modules\") pod \"kube-proxy-b847z\" (UID: \"73412d34-4505-40c6-9009-a70a7fa8cecb\") " pod="kube-system/kube-proxy-b847z" Dec 13 13:39:36.571420 kubelet[1849]: I1213 13:39:36.571353 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk8kc\" (UniqueName: \"kubernetes.io/projected/73412d34-4505-40c6-9009-a70a7fa8cecb-kube-api-access-zk8kc\") pod \"kube-proxy-b847z\" (UID: \"73412d34-4505-40c6-9009-a70a7fa8cecb\") " pod="kube-system/kube-proxy-b847z" Dec 13 13:39:36.571420 kubelet[1849]: I1213 13:39:36.571408 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cd289d4-e94b-4467-9501-6954d295e4cd-xtables-lock\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.571595 kubelet[1849]: I1213 13:39:36.571462 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d03366d-217c-4fd9-b86c-ea2d2a91fb87-kubelet-dir\") pod \"csi-node-driver-r8fks\" (UID: \"5d03366d-217c-4fd9-b86c-ea2d2a91fb87\") " pod="calico-system/csi-node-driver-r8fks" Dec 13 
13:39:36.571595 kubelet[1849]: I1213 13:39:36.571535 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjcp\" (UniqueName: \"kubernetes.io/projected/5d03366d-217c-4fd9-b86c-ea2d2a91fb87-kube-api-access-vqjcp\") pod \"csi-node-driver-r8fks\" (UID: \"5d03366d-217c-4fd9-b86c-ea2d2a91fb87\") " pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:36.571595 kubelet[1849]: I1213 13:39:36.571588 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8cd289d4-e94b-4467-9501-6954d295e4cd-flexvol-driver-host\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.571832 kubelet[1849]: I1213 13:39:36.571727 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8cd289d4-e94b-4467-9501-6954d295e4cd-var-run-calico\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.571832 kubelet[1849]: I1213 13:39:36.571789 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8cd289d4-e94b-4467-9501-6954d295e4cd-var-lib-calico\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.571954 kubelet[1849]: I1213 13:39:36.571841 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73412d34-4505-40c6-9009-a70a7fa8cecb-xtables-lock\") pod \"kube-proxy-b847z\" (UID: \"73412d34-4505-40c6-9009-a70a7fa8cecb\") " pod="kube-system/kube-proxy-b847z" Dec 13 13:39:36.571954 kubelet[1849]: I1213 
13:39:36.571894 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8cd289d4-e94b-4467-9501-6954d295e4cd-node-certs\") pod \"calico-node-znh9x\" (UID: \"8cd289d4-e94b-4467-9501-6954d295e4cd\") " pod="calico-system/calico-node-znh9x" Dec 13 13:39:36.573024 systemd[1]: Created slice kubepods-besteffort-pod73412d34_4505_40c6_9009_a70a7fa8cecb.slice - libcontainer container kubepods-besteffort-pod73412d34_4505_40c6_9009_a70a7fa8cecb.slice. Dec 13 13:39:36.688229 kubelet[1849]: E1213 13:39:36.687787 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.688229 kubelet[1849]: W1213 13:39:36.687838 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.688229 kubelet[1849]: E1213 13:39:36.687882 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:39:36.702458 kubelet[1849]: E1213 13:39:36.702191 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.702458 kubelet[1849]: W1213 13:39:36.702231 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.703520 kubelet[1849]: E1213 13:39:36.703491 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.703970 kubelet[1849]: W1213 13:39:36.703713 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.704288 kubelet[1849]: E1213 13:39:36.704261 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.704599 kubelet[1849]: W1213 13:39:36.704425 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.705897 kubelet[1849]: E1213 13:39:36.705867 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.706192 kubelet[1849]: W1213 13:39:36.706014 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.709790 kubelet[1849]: E1213 13:39:36.709755 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Dec 13 13:39:36.709971 kubelet[1849]: W1213 13:39:36.709944 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.713671 kubelet[1849]: E1213 13:39:36.711338 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.722683 kubelet[1849]: E1213 13:39:36.721112 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.722683 kubelet[1849]: E1213 13:39:36.721144 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.722683 kubelet[1849]: E1213 13:39:36.721223 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:39:36.722683 kubelet[1849]: E1213 13:39:36.721855 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.722683 kubelet[1849]: W1213 13:39:36.721876 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.727818 kubelet[1849]: E1213 13:39:36.727778 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.728016 kubelet[1849]: W1213 13:39:36.727951 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.728669 kubelet[1849]: E1213 13:39:36.728577 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.728787 kubelet[1849]: E1213 13:39:36.728701 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.728787 kubelet[1849]: E1213 13:39:36.728766 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:39:36.730995 kubelet[1849]: E1213 13:39:36.730965 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.731309 kubelet[1849]: W1213 13:39:36.731130 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.732787 kubelet[1849]: E1213 13:39:36.732755 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.733157 kubelet[1849]: E1213 13:39:36.733031 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.733157 kubelet[1849]: W1213 13:39:36.733055 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.733542 kubelet[1849]: E1213 13:39:36.733492 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:39:36.734680 kubelet[1849]: E1213 13:39:36.734433 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.734680 kubelet[1849]: W1213 13:39:36.734469 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.736706 kubelet[1849]: E1213 13:39:36.734868 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.736706 kubelet[1849]: W1213 13:39:36.734902 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.736706 kubelet[1849]: E1213 13:39:36.735189 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.736706 kubelet[1849]: W1213 13:39:36.735208 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.736706 kubelet[1849]: E1213 13:39:36.735491 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.736706 kubelet[1849]: W1213 13:39:36.735510 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.736706 kubelet[1849]: E1213 13:39:36.735846 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.736706 kubelet[1849]: E1213 13:39:36.735909 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.736706 kubelet[1849]: E1213 13:39:36.735920 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.736706 kubelet[1849]: W1213 13:39:36.735939 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.737480 kubelet[1849]: E1213 13:39:36.735949 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.737480 kubelet[1849]: E1213 13:39:36.735970 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.737480 kubelet[1849]: E1213 13:39:36.736779 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:39:36.739191 kubelet[1849]: E1213 13:39:36.739007 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.739191 kubelet[1849]: W1213 13:39:36.739115 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.739688 kubelet[1849]: E1213 13:39:36.739413 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.740085 kubelet[1849]: E1213 13:39:36.740055 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.740224 kubelet[1849]: W1213 13:39:36.740200 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.740765 kubelet[1849]: E1213 13:39:36.740737 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.741016 kubelet[1849]: W1213 13:39:36.740903 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.741204 kubelet[1849]: E1213 13:39:36.741127 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:39:36.741204 kubelet[1849]: E1213 13:39:36.741175 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.741424 kubelet[1849]: E1213 13:39:36.741406 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.741675 kubelet[1849]: W1213 13:39:36.741515 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.741675 kubelet[1849]: E1213 13:39:36.741545 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.742209 kubelet[1849]: E1213 13:39:36.742138 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.742209 kubelet[1849]: W1213 13:39:36.742157 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.742209 kubelet[1849]: E1213 13:39:36.742179 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:39:36.756107 kubelet[1849]: E1213 13:39:36.756093 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:36.756759 kubelet[1849]: W1213 13:39:36.756717 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:36.756759 kubelet[1849]: E1213 13:39:36.756737 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:36.868868 containerd[1455]: time="2024-12-13T13:39:36.868610163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-znh9x,Uid:8cd289d4-e94b-4467-9501-6954d295e4cd,Namespace:calico-system,Attempt:0,}" Dec 13 13:39:36.878981 containerd[1455]: time="2024-12-13T13:39:36.878739755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b847z,Uid:73412d34-4505-40c6-9009-a70a7fa8cecb,Namespace:kube-system,Attempt:0,}" Dec 13 13:39:37.521102 kubelet[1849]: E1213 13:39:37.521056 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:39:37.556836 containerd[1455]: time="2024-12-13T13:39:37.556613589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:39:37.560093 containerd[1455]: time="2024-12-13T13:39:37.560032813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 13:39:37.562681 containerd[1455]: time="2024-12-13T13:39:37.561742039Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:39:37.563951 containerd[1455]: time="2024-12-13T13:39:37.563874300Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:39:37.564615 containerd[1455]: time="2024-12-13T13:39:37.564567926Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:39:37.570076 containerd[1455]: time="2024-12-13T13:39:37.570018228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:39:37.572785 containerd[1455]: time="2024-12-13T13:39:37.572716263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 693.383066ms" Dec 13 13:39:37.574783 containerd[1455]: time="2024-12-13T13:39:37.574607378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 705.680383ms" Dec 13 13:39:37.657129 kubelet[1849]: E1213 13:39:37.657010 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87" Dec 13 13:39:37.694359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3705421850.mount: Deactivated successfully. Dec 13 13:39:37.807053 containerd[1455]: time="2024-12-13T13:39:37.806248622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:39:37.807053 containerd[1455]: time="2024-12-13T13:39:37.806418480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:39:37.807053 containerd[1455]: time="2024-12-13T13:39:37.806469823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:39:37.807053 containerd[1455]: time="2024-12-13T13:39:37.806945835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:39:37.808919 containerd[1455]: time="2024-12-13T13:39:37.808477729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:39:37.809179 containerd[1455]: time="2024-12-13T13:39:37.809097381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:39:37.809383 containerd[1455]: time="2024-12-13T13:39:37.809146510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:39:37.810833 containerd[1455]: time="2024-12-13T13:39:37.810729958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:39:37.925044 systemd[1]: Started cri-containerd-4d5c1b995800922579429ee3e1420091fa2157e9e0bb6626b13deeb9517be589.scope - libcontainer container 4d5c1b995800922579429ee3e1420091fa2157e9e0bb6626b13deeb9517be589. Dec 13 13:39:37.930213 systemd[1]: Started cri-containerd-63a295668b578cd2f41c42f231b0cf5a0a143ab51387f6416f67a7e732ef532a.scope - libcontainer container 63a295668b578cd2f41c42f231b0cf5a0a143ab51387f6416f67a7e732ef532a. Dec 13 13:39:37.960070 containerd[1455]: time="2024-12-13T13:39:37.959944210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b847z,Uid:73412d34-4505-40c6-9009-a70a7fa8cecb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d5c1b995800922579429ee3e1420091fa2157e9e0bb6626b13deeb9517be589\"" Dec 13 13:39:37.963404 containerd[1455]: time="2024-12-13T13:39:37.963209373Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:39:37.979197 containerd[1455]: time="2024-12-13T13:39:37.978921810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-znh9x,Uid:8cd289d4-e94b-4467-9501-6954d295e4cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"63a295668b578cd2f41c42f231b0cf5a0a143ab51387f6416f67a7e732ef532a\"" Dec 13 13:39:38.523536 kubelet[1849]: E1213 13:39:38.522780 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:39:39.367298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042805949.mount: Deactivated successfully. 
Dec 13 13:39:39.523279 kubelet[1849]: E1213 13:39:39.523190 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:39:39.656675 kubelet[1849]: E1213 13:39:39.656653 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87" Dec 13 13:39:39.835045 containerd[1455]: time="2024-12-13T13:39:39.834973109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:39:39.836440 containerd[1455]: time="2024-12-13T13:39:39.836382593Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Dec 13 13:39:39.837632 containerd[1455]: time="2024-12-13T13:39:39.837562739Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:39:39.840156 containerd[1455]: time="2024-12-13T13:39:39.840111275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:39:39.840968 containerd[1455]: time="2024-12-13T13:39:39.840821658Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.877581689s" Dec 13 13:39:39.840968 containerd[1455]: 
time="2024-12-13T13:39:39.840852524Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 13:39:39.842226 containerd[1455]: time="2024-12-13T13:39:39.841769692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:39:39.844435 containerd[1455]: time="2024-12-13T13:39:39.843636177Z" level=info msg="CreateContainer within sandbox \"4d5c1b995800922579429ee3e1420091fa2157e9e0bb6626b13deeb9517be589\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:39:39.865808 containerd[1455]: time="2024-12-13T13:39:39.865772461Z" level=info msg="CreateContainer within sandbox \"4d5c1b995800922579429ee3e1420091fa2157e9e0bb6626b13deeb9517be589\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0080fb5cc52a0a2289aadc9486b6abadf1062540cb9991ff4384eabccce7bf34\"" Dec 13 13:39:39.868643 containerd[1455]: time="2024-12-13T13:39:39.868597598Z" level=info msg="StartContainer for \"0080fb5cc52a0a2289aadc9486b6abadf1062540cb9991ff4384eabccce7bf34\"" Dec 13 13:39:39.907747 systemd[1]: Started cri-containerd-0080fb5cc52a0a2289aadc9486b6abadf1062540cb9991ff4384eabccce7bf34.scope - libcontainer container 0080fb5cc52a0a2289aadc9486b6abadf1062540cb9991ff4384eabccce7bf34. 
Dec 13 13:39:39.950120 containerd[1455]: time="2024-12-13T13:39:39.950079286Z" level=info msg="StartContainer for \"0080fb5cc52a0a2289aadc9486b6abadf1062540cb9991ff4384eabccce7bf34\" returns successfully" Dec 13 13:39:40.523688 kubelet[1849]: E1213 13:39:40.523500 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:39:40.894038 kubelet[1849]: E1213 13:39:40.893986 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:40.894038 kubelet[1849]: W1213 13:39:40.894027 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:40.894998 kubelet[1849]: E1213 13:39:40.894102 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:39:40.894998 kubelet[1849]: E1213 13:39:40.894668 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:39:40.894998 kubelet[1849]: W1213 13:39:40.894690 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:39:40.894998 kubelet[1849]: E1213 13:39:40.894762 1849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:39:40.895227 kubelet[1849]: -- the preceding three-message FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:730) repeated through 13:39:40.910055; duplicate entries omitted -- Dec 13 13:39:40.902675 kubelet[1849]: I1213 13:39:40.902183 1849 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b847z" podStartSLOduration=5.023005937 podStartE2EDuration="6.902042316s" podCreationTimestamp="2024-12-13 13:39:34 +0000 UTC" firstStartedPulling="2024-12-13 13:39:37.962450809 +0000 UTC m=+5.161680131" lastFinishedPulling="2024-12-13 13:39:39.841487188 +0000 UTC m=+7.040716510" observedRunningTime="2024-12-13 13:39:40.900883613 +0000 UTC m=+8.100113036" watchObservedRunningTime="2024-12-13 13:39:40.902042316 +0000 UTC m=+8.101271688" Dec 13 13:39:41.505637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168835115.mount: Deactivated successfully. 
Dec 13 13:39:41.524732 kubelet[1849]: E1213 13:39:41.524685 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:39:41.649699 containerd[1455]: time="2024-12-13T13:39:41.649022478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:39:41.651070 containerd[1455]: time="2024-12-13T13:39:41.650864341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 13:39:41.652069 containerd[1455]: time="2024-12-13T13:39:41.652000064Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:39:41.654512 containerd[1455]: time="2024-12-13T13:39:41.654461127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:39:41.655908 containerd[1455]: time="2024-12-13T13:39:41.655261860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.813463885s" Dec 13 13:39:41.655908 containerd[1455]: time="2024-12-13T13:39:41.655303837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 13:39:41.656637 kubelet[1849]: E1213 13:39:41.656418 1849 pod_workers.go:1298] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87" Dec 13 13:39:41.657793 containerd[1455]: time="2024-12-13T13:39:41.657764869Z" level=info msg="CreateContainer within sandbox \"63a295668b578cd2f41c42f231b0cf5a0a143ab51387f6416f67a7e732ef532a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:39:41.679166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878247470.mount: Deactivated successfully. Dec 13 13:39:41.685700 containerd[1455]: time="2024-12-13T13:39:41.685646504Z" level=info msg="CreateContainer within sandbox \"63a295668b578cd2f41c42f231b0cf5a0a143ab51387f6416f67a7e732ef532a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c3489af116859b4053f074c01ccdada6e55ce525700ca4e4d50440f6a4cfc77\"" Dec 13 13:39:41.686332 containerd[1455]: time="2024-12-13T13:39:41.686272577Z" level=info msg="StartContainer for \"6c3489af116859b4053f074c01ccdada6e55ce525700ca4e4d50440f6a4cfc77\"" Dec 13 13:39:41.716777 systemd[1]: Started cri-containerd-6c3489af116859b4053f074c01ccdada6e55ce525700ca4e4d50440f6a4cfc77.scope - libcontainer container 6c3489af116859b4053f074c01ccdada6e55ce525700ca4e4d50440f6a4cfc77. Dec 13 13:39:41.755586 containerd[1455]: time="2024-12-13T13:39:41.755471878Z" level=info msg="StartContainer for \"6c3489af116859b4053f074c01ccdada6e55ce525700ca4e4d50440f6a4cfc77\" returns successfully" Dec 13 13:39:41.763773 systemd[1]: cri-containerd-6c3489af116859b4053f074c01ccdada6e55ce525700ca4e4d50440f6a4cfc77.scope: Deactivated successfully. Dec 13 13:39:42.434955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c3489af116859b4053f074c01ccdada6e55ce525700ca4e4d50440f6a4cfc77-rootfs.mount: Deactivated successfully. 
Dec 13 13:39:42.440602 containerd[1455]: time="2024-12-13T13:39:42.440460197Z" level=info msg="shim disconnected" id=6c3489af116859b4053f074c01ccdada6e55ce525700ca4e4d50440f6a4cfc77 namespace=k8s.io
Dec 13 13:39:42.440602 containerd[1455]: time="2024-12-13T13:39:42.440563556Z" level=warning msg="cleaning up after shim disconnected" id=6c3489af116859b4053f074c01ccdada6e55ce525700ca4e4d50440f6a4cfc77 namespace=k8s.io
Dec 13 13:39:42.440602 containerd[1455]: time="2024-12-13T13:39:42.440587379Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:39:42.525551 kubelet[1849]: E1213 13:39:42.525500 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:42.892604 containerd[1455]: time="2024-12-13T13:39:42.892385928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 13:39:43.525896 kubelet[1849]: E1213 13:39:43.525823 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:43.656696 kubelet[1849]: E1213 13:39:43.656569 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:39:44.527649 kubelet[1849]: E1213 13:39:44.526808 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:45.527411 kubelet[1849]: E1213 13:39:45.527343 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:45.657040 kubelet[1849]: E1213 13:39:45.656983 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:39:46.528404 kubelet[1849]: E1213 13:39:46.528322 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:47.191256 update_engine[1437]: I20241213 13:39:47.191159 1437 update_attempter.cc:509] Updating boot flags...
Dec 13 13:39:47.235774 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2281)
Dec 13 13:39:47.328056 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2285)
Dec 13 13:39:47.529253 kubelet[1849]: E1213 13:39:47.529040 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:47.658047 kubelet[1849]: E1213 13:39:47.657869 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:39:48.529808 kubelet[1849]: E1213 13:39:48.529721 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:49.347692 containerd[1455]: time="2024-12-13T13:39:49.347551561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:39:49.350398 containerd[1455]: time="2024-12-13T13:39:49.349863068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 13 13:39:49.352117 containerd[1455]: time="2024-12-13T13:39:49.351985284Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:39:49.358493 containerd[1455]: time="2024-12-13T13:39:49.358381811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:39:49.362105 containerd[1455]: time="2024-12-13T13:39:49.361749297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.469255181s"
Dec 13 13:39:49.362105 containerd[1455]: time="2024-12-13T13:39:49.361828905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 13:39:49.367233 containerd[1455]: time="2024-12-13T13:39:49.366968642Z" level=info msg="CreateContainer within sandbox \"63a295668b578cd2f41c42f231b0cf5a0a143ab51387f6416f67a7e732ef532a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 13:39:49.404532 containerd[1455]: time="2024-12-13T13:39:49.404334722Z" level=info msg="CreateContainer within sandbox \"63a295668b578cd2f41c42f231b0cf5a0a143ab51387f6416f67a7e732ef532a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fc19fba7191d363af9de04591caeaa26693a2bfd2e1c2e4c5c03941cd10c18ad\""
Dec 13 13:39:49.405698 containerd[1455]: time="2024-12-13T13:39:49.405561847Z" level=info msg="StartContainer for \"fc19fba7191d363af9de04591caeaa26693a2bfd2e1c2e4c5c03941cd10c18ad\""
Dec 13 13:39:49.468318 systemd[1]: Started cri-containerd-fc19fba7191d363af9de04591caeaa26693a2bfd2e1c2e4c5c03941cd10c18ad.scope - libcontainer container fc19fba7191d363af9de04591caeaa26693a2bfd2e1c2e4c5c03941cd10c18ad.
Dec 13 13:39:49.508451 containerd[1455]: time="2024-12-13T13:39:49.508372466Z" level=info msg="StartContainer for \"fc19fba7191d363af9de04591caeaa26693a2bfd2e1c2e4c5c03941cd10c18ad\" returns successfully"
Dec 13 13:39:49.529949 kubelet[1849]: E1213 13:39:49.529851 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:49.656502 kubelet[1849]: E1213 13:39:49.656325 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:39:50.530691 kubelet[1849]: E1213 13:39:50.530550 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:51.116961 containerd[1455]: time="2024-12-13T13:39:51.116827696Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:39:51.124053 systemd[1]: cri-containerd-fc19fba7191d363af9de04591caeaa26693a2bfd2e1c2e4c5c03941cd10c18ad.scope: Deactivated successfully.
Dec 13 13:39:51.179539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc19fba7191d363af9de04591caeaa26693a2bfd2e1c2e4c5c03941cd10c18ad-rootfs.mount: Deactivated successfully.
Dec 13 13:39:51.186222 kubelet[1849]: I1213 13:39:51.186186 1849 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 13:39:52.138957 kubelet[1849]: E1213 13:39:51.531666 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:51.669288 systemd[1]: Created slice kubepods-besteffort-pod5d03366d_217c_4fd9_b86c_ea2d2a91fb87.slice - libcontainer container kubepods-besteffort-pod5d03366d_217c_4fd9_b86c_ea2d2a91fb87.slice.
Dec 13 13:39:52.142516 containerd[1455]: time="2024-12-13T13:39:52.141894815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:0,}"
Dec 13 13:39:52.532251 kubelet[1849]: E1213 13:39:52.532186 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:53.230390 containerd[1455]: time="2024-12-13T13:39:53.230124036Z" level=info msg="shim disconnected" id=fc19fba7191d363af9de04591caeaa26693a2bfd2e1c2e4c5c03941cd10c18ad namespace=k8s.io
Dec 13 13:39:53.230390 containerd[1455]: time="2024-12-13T13:39:53.230177525Z" level=warning msg="cleaning up after shim disconnected" id=fc19fba7191d363af9de04591caeaa26693a2bfd2e1c2e4c5c03941cd10c18ad namespace=k8s.io
Dec 13 13:39:53.230390 containerd[1455]: time="2024-12-13T13:39:53.230188807Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:39:53.328271 containerd[1455]: time="2024-12-13T13:39:53.328176536Z" level=error msg="Failed to destroy network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:53.330768 containerd[1455]: time="2024-12-13T13:39:53.330700522Z" level=error msg="encountered an error cleaning up failed sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:53.331034 containerd[1455]: time="2024-12-13T13:39:53.330854536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:53.331977 kubelet[1849]: E1213 13:39:53.331957 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:53.332599 kubelet[1849]: E1213 13:39:53.332283 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:39:53.332599 kubelet[1849]: E1213 13:39:53.332315 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:39:53.332599 kubelet[1849]: E1213 13:39:53.332374 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:39:53.333297 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b-shm.mount: Deactivated successfully.
Dec 13 13:39:53.533002 kubelet[1849]: E1213 13:39:53.532786 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:53.929480 containerd[1455]: time="2024-12-13T13:39:53.929396507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 13:39:53.930980 kubelet[1849]: I1213 13:39:53.930894 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b"
Dec 13 13:39:53.932082 containerd[1455]: time="2024-12-13T13:39:53.931935610Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\""
Dec 13 13:39:53.933192 containerd[1455]: time="2024-12-13T13:39:53.932452457Z" level=info msg="Ensure that sandbox d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b in task-service has been cleanup successfully"
Dec 13 13:39:53.938819 containerd[1455]: time="2024-12-13T13:39:53.938750596Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully"
Dec 13 13:39:53.938819 containerd[1455]: time="2024-12-13T13:39:53.938810195Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully"
Dec 13 13:39:53.940910 containerd[1455]: time="2024-12-13T13:39:53.939864879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:1,}"
Dec 13 13:39:53.941126 systemd[1]: run-netns-cni\x2dd2e8300a\x2d2044\x2de280\x2d8334\x2d7a7b54470810.mount: Deactivated successfully.
Dec 13 13:39:54.077858 containerd[1455]: time="2024-12-13T13:39:54.077721950Z" level=error msg="Failed to destroy network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:54.078921 containerd[1455]: time="2024-12-13T13:39:54.078678884Z" level=error msg="encountered an error cleaning up failed sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:54.078921 containerd[1455]: time="2024-12-13T13:39:54.078793967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:54.079267 kubelet[1849]: E1213 13:39:54.079171 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:54.079359 kubelet[1849]: E1213 13:39:54.079326 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:39:54.079401 kubelet[1849]: E1213 13:39:54.079392 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:39:54.079530 kubelet[1849]: E1213 13:39:54.079508 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:39:54.247731 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073-shm.mount: Deactivated successfully.
Dec 13 13:39:54.520048 kubelet[1849]: E1213 13:39:54.519555 1849 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:54.534108 kubelet[1849]: E1213 13:39:54.534003 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:54.936925 kubelet[1849]: I1213 13:39:54.936829 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073"
Dec 13 13:39:54.939106 containerd[1455]: time="2024-12-13T13:39:54.937966184Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\""
Dec 13 13:39:54.939106 containerd[1455]: time="2024-12-13T13:39:54.938387134Z" level=info msg="Ensure that sandbox 85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073 in task-service has been cleanup successfully"
Dec 13 13:39:54.941861 containerd[1455]: time="2024-12-13T13:39:54.940684443Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully"
Dec 13 13:39:54.941861 containerd[1455]: time="2024-12-13T13:39:54.940749985Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully"
Dec 13 13:39:54.945019 containerd[1455]: time="2024-12-13T13:39:54.942528341Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\""
Dec 13 13:39:54.945019 containerd[1455]: time="2024-12-13T13:39:54.942893498Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully"
Dec 13 13:39:54.945019 containerd[1455]: time="2024-12-13T13:39:54.942948580Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully"
Dec 13 13:39:54.947854 containerd[1455]: time="2024-12-13T13:39:54.946899045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:2,}"
Dec 13 13:39:54.947474 systemd[1]: run-netns-cni\x2dff24e9db\x2da073\x2d6797\x2d38fc\x2df19a53b042a5.mount: Deactivated successfully.
Dec 13 13:39:55.150510 containerd[1455]: time="2024-12-13T13:39:55.148288950Z" level=error msg="Failed to destroy network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:55.149969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556-shm.mount: Deactivated successfully.
Dec 13 13:39:55.151975 containerd[1455]: time="2024-12-13T13:39:55.151521027Z" level=error msg="encountered an error cleaning up failed sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:55.151975 containerd[1455]: time="2024-12-13T13:39:55.151795556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:55.152771 kubelet[1849]: E1213 13:39:55.152486 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:55.152771 kubelet[1849]: E1213 13:39:55.152592 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:39:55.153226 kubelet[1849]: E1213 13:39:55.153011 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:39:55.153226 kubelet[1849]: E1213 13:39:55.153173 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:39:55.534897 kubelet[1849]: E1213 13:39:55.534826 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:55.942016 kubelet[1849]: I1213 13:39:55.941006 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556"
Dec 13 13:39:55.942972 containerd[1455]: time="2024-12-13T13:39:55.942389621Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\""
Dec 13 13:39:55.942972 containerd[1455]: time="2024-12-13T13:39:55.942760959Z" level=info msg="Ensure that sandbox 7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556 in task-service has been cleanup successfully"
Dec 13 13:39:55.943708 containerd[1455]: time="2024-12-13T13:39:55.943660238Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully"
Dec 13 13:39:55.944038 containerd[1455]: time="2024-12-13T13:39:55.943836175Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully"
Dec 13 13:39:55.944865 containerd[1455]: time="2024-12-13T13:39:55.944386144Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\""
Dec 13 13:39:55.944865 containerd[1455]: time="2024-12-13T13:39:55.944531183Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully"
Dec 13 13:39:55.944865 containerd[1455]: time="2024-12-13T13:39:55.944554928Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully"
Dec 13 13:39:55.945322 containerd[1455]: time="2024-12-13T13:39:55.945280223Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\""
Dec 13 13:39:55.945587 containerd[1455]: time="2024-12-13T13:39:55.945551056Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully"
Dec 13 13:39:55.946243 containerd[1455]: time="2024-12-13T13:39:55.945764723Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully"
Dec 13 13:39:55.946057 systemd[1]: run-netns-cni\x2daef9b375\x2d83e9\x2d7c12\x2d663b\x2dc3c4a8a78806.mount: Deactivated successfully.
Dec 13 13:39:55.948036 containerd[1455]: time="2024-12-13T13:39:55.946747516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:3,}"
Dec 13 13:39:56.294006 containerd[1455]: time="2024-12-13T13:39:56.293882410Z" level=error msg="Failed to destroy network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:56.296203 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e-shm.mount: Deactivated successfully.
Dec 13 13:39:56.297295 containerd[1455]: time="2024-12-13T13:39:56.297023813Z" level=error msg="encountered an error cleaning up failed sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:56.299351 containerd[1455]: time="2024-12-13T13:39:56.297734943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:56.299419 kubelet[1849]: E1213 13:39:56.298921 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:39:56.299419 kubelet[1849]: E1213 13:39:56.298988 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:39:56.299419 kubelet[1849]: E1213 13:39:56.299014 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:39:56.299522 kubelet[1849]: E1213 13:39:56.299082 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:39:56.535312 kubelet[1849]: E1213 13:39:56.535244 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:39:56.948663 kubelet[1849]: I1213 13:39:56.947317 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e"
Dec 13 13:39:56.949846 containerd[1455]: time="2024-12-13T13:39:56.949136428Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\""
Dec 13 13:39:56.949846 containerd[1455]: time="2024-12-13T13:39:56.949553422Z" level=info msg="Ensure that sandbox 0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e in task-service has been cleanup successfully"
Dec 13 13:39:56.952848 containerd[1455]: time="2024-12-13T13:39:56.952799148Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully"
Dec 13 13:39:56.955422 containerd[1455]: time="2024-12-13T13:39:56.952998348Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully"
Dec 13 13:39:56.955803 containerd[1455]: time="2024-12-13T13:39:56.955742733Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\""
Dec 13 13:39:56.955916 containerd[1455]: time="2024-12-13T13:39:56.955854792Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully"
Dec 13 13:39:56.955916 containerd[1455]: time="2024-12-13T13:39:56.955873215Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully"
Dec 13 13:39:56.956093 systemd[1]: run-netns-cni\x2d2a8a4d60\x2d0324\x2d7c10\x2d81d1\x2d7c8ee8b3189f.mount: Deactivated successfully.
Dec 13 13:39:56.957265 containerd[1455]: time="2024-12-13T13:39:56.956922703Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\"" Dec 13 13:39:56.957265 containerd[1455]: time="2024-12-13T13:39:56.957001801Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully" Dec 13 13:39:56.957265 containerd[1455]: time="2024-12-13T13:39:56.957014304Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully" Dec 13 13:39:56.958022 containerd[1455]: time="2024-12-13T13:39:56.957896452Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\"" Dec 13 13:39:56.958022 containerd[1455]: time="2024-12-13T13:39:56.957961062Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully" Dec 13 13:39:56.958022 containerd[1455]: time="2024-12-13T13:39:56.957971531Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully" Dec 13 13:39:56.958839 containerd[1455]: time="2024-12-13T13:39:56.958474325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:4,}" Dec 13 13:39:57.536338 kubelet[1849]: E1213 13:39:57.536244 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:39:57.665838 kubelet[1849]: I1213 13:39:57.664378 1849 topology_manager.go:215] "Topology Admit Handler" podUID="e992dced-14d2-4b14-8f2a-2ecb469b78aa" podNamespace="default" podName="nginx-deployment-6d5f899847-vxhhl" Dec 13 13:39:57.675271 systemd[1]: Created slice kubepods-besteffort-pode992dced_14d2_4b14_8f2a_2ecb469b78aa.slice - libcontainer container 
kubepods-besteffort-pode992dced_14d2_4b14_8f2a_2ecb469b78aa.slice. Dec 13 13:39:57.704226 containerd[1455]: time="2024-12-13T13:39:57.704171971Z" level=error msg="Failed to destroy network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:57.705989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a-shm.mount: Deactivated successfully. Dec 13 13:39:57.707571 containerd[1455]: time="2024-12-13T13:39:57.706928622Z" level=error msg="encountered an error cleaning up failed sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:57.707571 containerd[1455]: time="2024-12-13T13:39:57.706991059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:57.707759 kubelet[1849]: E1213 13:39:57.707630 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:57.707819 kubelet[1849]: E1213 13:39:57.707794 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:57.707848 kubelet[1849]: E1213 13:39:57.707832 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:57.707921 kubelet[1849]: E1213 13:39:57.707905 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87" Dec 13 13:39:57.815191 kubelet[1849]: I1213 13:39:57.814942 1849 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxgpx\" (UniqueName: \"kubernetes.io/projected/e992dced-14d2-4b14-8f2a-2ecb469b78aa-kube-api-access-pxgpx\") pod \"nginx-deployment-6d5f899847-vxhhl\" (UID: \"e992dced-14d2-4b14-8f2a-2ecb469b78aa\") " pod="default/nginx-deployment-6d5f899847-vxhhl" Dec 13 13:39:57.957643 kubelet[1849]: I1213 13:39:57.956057 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a" Dec 13 13:39:57.958121 containerd[1455]: time="2024-12-13T13:39:57.958068777Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\"" Dec 13 13:39:57.958476 containerd[1455]: time="2024-12-13T13:39:57.958432562Z" level=info msg="Ensure that sandbox 22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a in task-service has been cleanup successfully" Dec 13 13:39:57.960220 systemd[1]: run-netns-cni\x2d58378e2f\x2dbe58\x2d95e2\x2d7b80\x2d3a919c523183.mount: Deactivated successfully. 
Dec 13 13:39:57.961714 containerd[1455]: time="2024-12-13T13:39:57.961684184Z" level=info msg="TearDown network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" successfully" Dec 13 13:39:57.961812 containerd[1455]: time="2024-12-13T13:39:57.961795370Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" returns successfully" Dec 13 13:39:57.962334 containerd[1455]: time="2024-12-13T13:39:57.962314584Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\"" Dec 13 13:39:57.962485 containerd[1455]: time="2024-12-13T13:39:57.962468020Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully" Dec 13 13:39:57.962554 containerd[1455]: time="2024-12-13T13:39:57.962537749Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully" Dec 13 13:39:57.965215 containerd[1455]: time="2024-12-13T13:39:57.965191168Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\"" Dec 13 13:39:57.965370 containerd[1455]: time="2024-12-13T13:39:57.965353269Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully" Dec 13 13:39:57.965452 containerd[1455]: time="2024-12-13T13:39:57.965436004Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully" Dec 13 13:39:57.966284 containerd[1455]: time="2024-12-13T13:39:57.966237753Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\"" Dec 13 13:39:57.966440 containerd[1455]: time="2024-12-13T13:39:57.966423217Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully" Dec 
13 13:39:57.968659 containerd[1455]: time="2024-12-13T13:39:57.968639545Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully" Dec 13 13:39:57.969276 containerd[1455]: time="2024-12-13T13:39:57.969228770Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\"" Dec 13 13:39:57.969430 containerd[1455]: time="2024-12-13T13:39:57.969395890Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully" Dec 13 13:39:57.969464 containerd[1455]: time="2024-12-13T13:39:57.969431015Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully" Dec 13 13:39:57.970671 containerd[1455]: time="2024-12-13T13:39:57.970409814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:5,}" Dec 13 13:39:57.980421 containerd[1455]: time="2024-12-13T13:39:57.980389645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:0,}" Dec 13 13:39:58.237715 containerd[1455]: time="2024-12-13T13:39:58.237657465Z" level=error msg="Failed to destroy network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:58.239059 containerd[1455]: time="2024-12-13T13:39:58.239023014Z" level=error msg="encountered an error cleaning up failed sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:58.239311 containerd[1455]: time="2024-12-13T13:39:58.239273750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:58.239773 kubelet[1849]: E1213 13:39:58.239743 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:58.239835 kubelet[1849]: E1213 13:39:58.239800 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:58.239835 kubelet[1849]: E1213 13:39:58.239826 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:58.239970 kubelet[1849]: E1213 13:39:58.239891 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87" Dec 13 13:39:58.260361 containerd[1455]: time="2024-12-13T13:39:58.260256043Z" level=error msg="Failed to destroy network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:58.260937 containerd[1455]: time="2024-12-13T13:39:58.260785217Z" level=error msg="encountered an error cleaning up failed sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:58.260937 containerd[1455]: time="2024-12-13T13:39:58.260848264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:58.261642 kubelet[1849]: E1213 13:39:58.261197 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:58.261642 kubelet[1849]: E1213 13:39:58.261263 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl" Dec 13 13:39:58.261642 kubelet[1849]: E1213 13:39:58.261289 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl" Dec 13 13:39:58.261778 kubelet[1849]: E1213 13:39:58.261389 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxhhl" podUID="e992dced-14d2-4b14-8f2a-2ecb469b78aa" Dec 13 13:39:58.537196 kubelet[1849]: E1213 13:39:58.536925 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:39:58.965174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b-shm.mount: Deactivated successfully. Dec 13 13:39:58.966062 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6-shm.mount: Deactivated successfully. 
Dec 13 13:39:58.970395 kubelet[1849]: I1213 13:39:58.970333 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6" Dec 13 13:39:58.974676 containerd[1455]: time="2024-12-13T13:39:58.972207926Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\"" Dec 13 13:39:58.974676 containerd[1455]: time="2024-12-13T13:39:58.972697896Z" level=info msg="Ensure that sandbox 6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6 in task-service has been cleanup successfully" Dec 13 13:39:58.978262 kubelet[1849]: I1213 13:39:58.976500 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b" Dec 13 13:39:58.978452 containerd[1455]: time="2024-12-13T13:39:58.977737595Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\"" Dec 13 13:39:58.978452 containerd[1455]: time="2024-12-13T13:39:58.978122690Z" level=info msg="Ensure that sandbox 988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b in task-service has been cleanup successfully" Dec 13 13:39:58.979161 containerd[1455]: time="2024-12-13T13:39:58.978794850Z" level=info msg="TearDown network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" successfully" Dec 13 13:39:58.979161 containerd[1455]: time="2024-12-13T13:39:58.978866733Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" returns successfully" Dec 13 13:39:58.979161 containerd[1455]: time="2024-12-13T13:39:58.978822902Z" level=info msg="TearDown network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" successfully" Dec 13 13:39:58.979161 containerd[1455]: time="2024-12-13T13:39:58.979009338Z" level=info msg="StopPodSandbox for 
\"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" returns successfully" Dec 13 13:39:58.980299 systemd[1]: run-netns-cni\x2d8fbf1c7c\x2d8387\x2d1d17\x2d120d\x2d26c305ab63da.mount: Deactivated successfully. Dec 13 13:39:58.984512 containerd[1455]: time="2024-12-13T13:39:58.981966335Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\"" Dec 13 13:39:58.984512 containerd[1455]: time="2024-12-13T13:39:58.982561501Z" level=info msg="TearDown network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" successfully" Dec 13 13:39:58.984512 containerd[1455]: time="2024-12-13T13:39:58.982602378Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" returns successfully" Dec 13 13:39:58.986361 containerd[1455]: time="2024-12-13T13:39:58.985448538Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\"" Dec 13 13:39:58.986361 containerd[1455]: time="2024-12-13T13:39:58.986310130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:1,}" Dec 13 13:39:58.986985 containerd[1455]: time="2024-12-13T13:39:58.986800301Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully" Dec 13 13:39:58.986985 containerd[1455]: time="2024-12-13T13:39:58.986847278Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully" Dec 13 13:39:58.988278 systemd[1]: run-netns-cni\x2dd0f4c405\x2d505a\x2d8411\x2d2dc4\x2d3a70d1bc875d.mount: Deactivated successfully. 
Dec 13 13:39:58.989353 containerd[1455]: time="2024-12-13T13:39:58.988992676Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\"" Dec 13 13:39:58.990695 containerd[1455]: time="2024-12-13T13:39:58.990102850Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully" Dec 13 13:39:58.990695 containerd[1455]: time="2024-12-13T13:39:58.990151460Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully" Dec 13 13:39:58.992434 containerd[1455]: time="2024-12-13T13:39:58.991111445Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\"" Dec 13 13:39:58.992434 containerd[1455]: time="2024-12-13T13:39:58.991358284Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully" Dec 13 13:39:58.992434 containerd[1455]: time="2024-12-13T13:39:58.991393419Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully" Dec 13 13:39:58.992434 containerd[1455]: time="2024-12-13T13:39:58.992214435Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\"" Dec 13 13:39:58.992434 containerd[1455]: time="2024-12-13T13:39:58.992380123Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully" Dec 13 13:39:58.992434 containerd[1455]: time="2024-12-13T13:39:58.992409138Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully" Dec 13 13:39:58.993272 containerd[1455]: time="2024-12-13T13:39:58.993209174Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:6,}" Dec 13 13:39:59.537668 kubelet[1849]: E1213 13:39:59.537559 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:39:59.810745 containerd[1455]: time="2024-12-13T13:39:59.810651062Z" level=error msg="Failed to destroy network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:59.811444 containerd[1455]: time="2024-12-13T13:39:59.811291954Z" level=error msg="encountered an error cleaning up failed sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:59.811444 containerd[1455]: time="2024-12-13T13:39:59.811352436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:59.812145 kubelet[1849]: E1213 13:39:59.811613 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:59.812145 kubelet[1849]: E1213 13:39:59.811695 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl" Dec 13 13:39:59.812145 kubelet[1849]: E1213 13:39:59.811723 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl" Dec 13 13:39:59.812244 kubelet[1849]: E1213 13:39:59.811801 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxhhl" podUID="e992dced-14d2-4b14-8f2a-2ecb469b78aa" Dec 13 13:39:59.840042 containerd[1455]: 
time="2024-12-13T13:39:59.839696495Z" level=error msg="Failed to destroy network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:59.841042 containerd[1455]: time="2024-12-13T13:39:59.840368544Z" level=error msg="encountered an error cleaning up failed sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:59.841042 containerd[1455]: time="2024-12-13T13:39:59.840429557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:59.841154 kubelet[1849]: E1213 13:39:59.840654 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:39:59.841154 kubelet[1849]: E1213 13:39:59.840710 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:59.841154 kubelet[1849]: E1213 13:39:59.840737 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks" Dec 13 13:39:59.841263 kubelet[1849]: E1213 13:39:59.840805 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87" Dec 13 13:39:59.962172 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2-shm.mount: Deactivated successfully. 
Dec 13 13:39:59.981681 kubelet[1849]: I1213 13:39:59.981654 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867"
Dec 13 13:39:59.982525 containerd[1455]: time="2024-12-13T13:39:59.982500369Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\""
Dec 13 13:39:59.983882 containerd[1455]: time="2024-12-13T13:39:59.983768147Z" level=info msg="Ensure that sandbox f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867 in task-service has been cleanup successfully"
Dec 13 13:39:59.983996 containerd[1455]: time="2024-12-13T13:39:59.983979330Z" level=info msg="TearDown network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" successfully"
Dec 13 13:39:59.984331 containerd[1455]: time="2024-12-13T13:39:59.984045513Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" returns successfully"
Dec 13 13:39:59.985471 systemd[1]: run-netns-cni\x2d6053bc41\x2ded0f\x2d3a27\x2d2e6a\x2d82c64c96828e.mount: Deactivated successfully.
Dec 13 13:39:59.987190 containerd[1455]: time="2024-12-13T13:39:59.987156961Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\""
Dec 13 13:39:59.987279 containerd[1455]: time="2024-12-13T13:39:59.987249242Z" level=info msg="TearDown network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" successfully"
Dec 13 13:39:59.987279 containerd[1455]: time="2024-12-13T13:39:59.987262196Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" returns successfully"
Dec 13 13:39:59.987812 containerd[1455]: time="2024-12-13T13:39:59.987789718Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\""
Dec 13 13:39:59.987960 containerd[1455]: time="2024-12-13T13:39:59.987942702Z" level=info msg="TearDown network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" successfully"
Dec 13 13:39:59.988028 containerd[1455]: time="2024-12-13T13:39:59.988013373Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" returns successfully"
Dec 13 13:39:59.988753 containerd[1455]: time="2024-12-13T13:39:59.988725718Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\""
Dec 13 13:39:59.988964 containerd[1455]: time="2024-12-13T13:39:59.988938755Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully"
Dec 13 13:39:59.989066 containerd[1455]: time="2024-12-13T13:39:59.989043169Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully"
Dec 13 13:39:59.989498 kubelet[1849]: I1213 13:39:59.989468 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2"
Dec 13 13:39:59.990494 containerd[1455]: time="2024-12-13T13:39:59.989932873Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\""
Dec 13 13:39:59.990494 containerd[1455]: time="2024-12-13T13:39:59.990062745Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully"
Dec 13 13:39:59.990494 containerd[1455]: time="2024-12-13T13:39:59.990081640Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully"
Dec 13 13:39:59.990494 containerd[1455]: time="2024-12-13T13:39:59.990222643Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\""
Dec 13 13:39:59.990494 containerd[1455]: time="2024-12-13T13:39:59.990374144Z" level=info msg="Ensure that sandbox 0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2 in task-service has been cleanup successfully"
Dec 13 13:39:59.992179 systemd[1]: run-netns-cni\x2dcfcecf63\x2dafac\x2ded62\x2d716d\x2de3218aa27a75.mount: Deactivated successfully.
Dec 13 13:39:59.992425 containerd[1455]: time="2024-12-13T13:39:59.992365969Z" level=info msg="TearDown network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" successfully"
Dec 13 13:39:59.992504 containerd[1455]: time="2024-12-13T13:39:59.992488967Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" returns successfully"
Dec 13 13:39:59.994543 containerd[1455]: time="2024-12-13T13:39:59.994123458Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\""
Dec 13 13:39:59.994543 containerd[1455]: time="2024-12-13T13:39:59.994212353Z" level=info msg="TearDown network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" successfully"
Dec 13 13:39:59.994543 containerd[1455]: time="2024-12-13T13:39:59.994224185Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" returns successfully"
Dec 13 13:39:59.994543 containerd[1455]: time="2024-12-13T13:39:59.994277063Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\""
Dec 13 13:39:59.994543 containerd[1455]: time="2024-12-13T13:39:59.994331134Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully"
Dec 13 13:39:59.994543 containerd[1455]: time="2024-12-13T13:39:59.994341563Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully"
Dec 13 13:39:59.995394 containerd[1455]: time="2024-12-13T13:39:59.994954944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:2,}"
Dec 13 13:39:59.995394 containerd[1455]: time="2024-12-13T13:39:59.995291320Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\""
Dec 13 13:39:59.995394 containerd[1455]: time="2024-12-13T13:39:59.995366960Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully"
Dec 13 13:39:59.995394 containerd[1455]: time="2024-12-13T13:39:59.995377981Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully"
Dec 13 13:39:59.995992 containerd[1455]: time="2024-12-13T13:39:59.995964802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:7,}"
Dec 13 13:40:00.139528 containerd[1455]: time="2024-12-13T13:40:00.137967952Z" level=error msg="Failed to destroy network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:00.140680 containerd[1455]: time="2024-12-13T13:40:00.140147849Z" level=error msg="Failed to destroy network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:00.140680 containerd[1455]: time="2024-12-13T13:40:00.140539227Z" level=error msg="encountered an error cleaning up failed sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:00.141341 containerd[1455]: time="2024-12-13T13:40:00.140592967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:00.141689 kubelet[1849]: E1213 13:40:00.141656 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:00.141755 kubelet[1849]: E1213 13:40:00.141709 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl"
Dec 13 13:40:00.141755 kubelet[1849]: E1213 13:40:00.141735 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl"
Dec 13 13:40:00.141824 kubelet[1849]: E1213 13:40:00.141788 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxhhl" podUID="e992dced-14d2-4b14-8f2a-2ecb469b78aa"
Dec 13 13:40:00.142062 containerd[1455]: time="2024-12-13T13:40:00.140804451Z" level=error msg="encountered an error cleaning up failed sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:00.142179 containerd[1455]: time="2024-12-13T13:40:00.141959961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:00.142330 kubelet[1849]: E1213 13:40:00.142261 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:00.142372 kubelet[1849]: E1213 13:40:00.142349 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:40:00.142401 kubelet[1849]: E1213 13:40:00.142374 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:40:00.142647 kubelet[1849]: E1213 13:40:00.142436 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:40:00.538612 kubelet[1849]: E1213 13:40:00.538357 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:00.961951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d-shm.mount: Deactivated successfully.
Dec 13 13:40:00.996214 kubelet[1849]: I1213 13:40:00.995472 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9"
Dec 13 13:40:00.996426 containerd[1455]: time="2024-12-13T13:40:00.996389672Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\""
Dec 13 13:40:00.996787 containerd[1455]: time="2024-12-13T13:40:00.996590025Z" level=info msg="Ensure that sandbox 8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9 in task-service has been cleanup successfully"
Dec 13 13:40:00.998313 systemd[1]: run-netns-cni\x2db600ca8d\x2d4a12\x2d3342\x2d853f\x2d977298823534.mount: Deactivated successfully.
Dec 13 13:40:00.998919 containerd[1455]: time="2024-12-13T13:40:00.998414009Z" level=info msg="TearDown network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" successfully"
Dec 13 13:40:00.998919 containerd[1455]: time="2024-12-13T13:40:00.998434818Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" returns successfully"
Dec 13 13:40:01.001051 containerd[1455]: time="2024-12-13T13:40:01.000057287Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\""
Dec 13 13:40:01.001051 containerd[1455]: time="2024-12-13T13:40:01.000141815Z" level=info msg="TearDown network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" successfully"
Dec 13 13:40:01.001051 containerd[1455]: time="2024-12-13T13:40:01.000154599Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" returns successfully"
Dec 13 13:40:01.001601 containerd[1455]: time="2024-12-13T13:40:01.001561887Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\""
Dec 13 13:40:01.001973 containerd[1455]: time="2024-12-13T13:40:01.001946824Z" level=info msg="TearDown network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" successfully"
Dec 13 13:40:01.001973 containerd[1455]: time="2024-12-13T13:40:01.001967503Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" returns successfully"
Dec 13 13:40:01.002538 containerd[1455]: time="2024-12-13T13:40:01.002505724Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\""
Dec 13 13:40:01.002605 containerd[1455]: time="2024-12-13T13:40:01.002578219Z" level=info msg="TearDown network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" successfully"
Dec 13 13:40:01.002605 containerd[1455]: time="2024-12-13T13:40:01.002590482Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" returns successfully"
Dec 13 13:40:01.003460 containerd[1455]: time="2024-12-13T13:40:01.003233980Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\""
Dec 13 13:40:01.003460 containerd[1455]: time="2024-12-13T13:40:01.003298490Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully"
Dec 13 13:40:01.003460 containerd[1455]: time="2024-12-13T13:40:01.003422361Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully"
Dec 13 13:40:01.003541 kubelet[1849]: I1213 13:40:01.002811 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d"
Dec 13 13:40:01.003581 containerd[1455]: time="2024-12-13T13:40:01.003476051Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\""
Dec 13 13:40:01.003847 containerd[1455]: time="2024-12-13T13:40:01.003681373Z" level=info msg="Ensure that sandbox bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d in task-service has been cleanup successfully"
Dec 13 13:40:01.005763 containerd[1455]: time="2024-12-13T13:40:01.004844899Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\""
Dec 13 13:40:01.005763 containerd[1455]: time="2024-12-13T13:40:01.004937913Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully"
Dec 13 13:40:01.005763 containerd[1455]: time="2024-12-13T13:40:01.004950607Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully"
Dec 13 13:40:01.006165 containerd[1455]: time="2024-12-13T13:40:01.006131595Z" level=info msg="TearDown network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" successfully"
Dec 13 13:40:01.006165 containerd[1455]: time="2024-12-13T13:40:01.006159747Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" returns successfully"
Dec 13 13:40:01.006717 containerd[1455]: time="2024-12-13T13:40:01.006697168Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\""
Dec 13 13:40:01.007016 systemd[1]: run-netns-cni\x2d98f3988d\x2d7f1e\x2de411\x2d7696\x2db4d8262d0b5a.mount: Deactivated successfully.
Dec 13 13:40:01.008194 containerd[1455]: time="2024-12-13T13:40:01.008175310Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully"
Dec 13 13:40:01.008355 containerd[1455]: time="2024-12-13T13:40:01.008332062Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully"
Dec 13 13:40:01.008895 containerd[1455]: time="2024-12-13T13:40:01.008875143Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\""
Dec 13 13:40:01.009070 containerd[1455]: time="2024-12-13T13:40:01.009054326Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully"
Dec 13 13:40:01.009167 containerd[1455]: time="2024-12-13T13:40:01.009138162Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully"
Dec 13 13:40:01.009358 containerd[1455]: time="2024-12-13T13:40:01.009337914Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\""
Dec 13 13:40:01.009489 containerd[1455]: time="2024-12-13T13:40:01.009472676Z" level=info msg="TearDown network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" successfully"
Dec 13 13:40:01.010155 containerd[1455]: time="2024-12-13T13:40:01.009539961Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" returns successfully"
Dec 13 13:40:01.010517 containerd[1455]: time="2024-12-13T13:40:01.010435748Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\""
Dec 13 13:40:01.010639 containerd[1455]: time="2024-12-13T13:40:01.010582551Z" level=info msg="TearDown network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" successfully"
Dec 13 13:40:01.010639 containerd[1455]: time="2024-12-13T13:40:01.010633627Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" returns successfully"
Dec 13 13:40:01.010757 containerd[1455]: time="2024-12-13T13:40:01.010732621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:8,}"
Dec 13 13:40:01.012847 containerd[1455]: time="2024-12-13T13:40:01.012810219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:3,}"
Dec 13 13:40:01.128658 containerd[1455]: time="2024-12-13T13:40:01.128606711Z" level=error msg="Failed to destroy network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:01.129410 containerd[1455]: time="2024-12-13T13:40:01.129286847Z" level=error msg="encountered an error cleaning up failed sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:01.129477 containerd[1455]: time="2024-12-13T13:40:01.129369902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:01.130254 kubelet[1849]: E1213 13:40:01.130188 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:01.130571 kubelet[1849]: E1213 13:40:01.130420 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl"
Dec 13 13:40:01.130571 kubelet[1849]: E1213 13:40:01.130505 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl"
Dec 13 13:40:01.130790 kubelet[1849]: E1213 13:40:01.130709 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxhhl" podUID="e992dced-14d2-4b14-8f2a-2ecb469b78aa"
Dec 13 13:40:01.147663 containerd[1455]: time="2024-12-13T13:40:01.147537797Z" level=error msg="Failed to destroy network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:01.148614 containerd[1455]: time="2024-12-13T13:40:01.148571001Z" level=error msg="encountered an error cleaning up failed sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:01.148677 containerd[1455]: time="2024-12-13T13:40:01.148652533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:01.149081 kubelet[1849]: E1213 13:40:01.148838 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:01.149081 kubelet[1849]: E1213 13:40:01.148889 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:40:01.149081 kubelet[1849]: E1213 13:40:01.148913 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:40:01.149200 kubelet[1849]: E1213 13:40:01.148981 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:40:01.538988 kubelet[1849]: E1213 13:40:01.538918 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:01.964263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3-shm.mount: Deactivated successfully.
Dec 13 13:40:02.007278 kubelet[1849]: I1213 13:40:02.007238 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49"
Dec 13 13:40:02.008106 containerd[1455]: time="2024-12-13T13:40:02.008047854Z" level=info msg="StopPodSandbox for \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\""
Dec 13 13:40:02.008407 containerd[1455]: time="2024-12-13T13:40:02.008336993Z" level=info msg="Ensure that sandbox a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49 in task-service has been cleanup successfully"
Dec 13 13:40:02.010904 containerd[1455]: time="2024-12-13T13:40:02.010114143Z" level=info msg="TearDown network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" successfully"
Dec 13 13:40:02.010904 containerd[1455]: time="2024-12-13T13:40:02.010140452Z" level=info msg="StopPodSandbox for \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" returns successfully"
Dec 13 13:40:02.010904 containerd[1455]: time="2024-12-13T13:40:02.010730211Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\""
Dec 13 13:40:02.010904 containerd[1455]: time="2024-12-13T13:40:02.010812184Z" level=info msg="TearDown network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" successfully"
Dec 13 13:40:02.010904 containerd[1455]: time="2024-12-13T13:40:02.010857608Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" returns successfully"
Dec 13 13:40:02.011997 containerd[1455]: time="2024-12-13T13:40:02.011857450Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\""
Dec 13 13:40:02.011997 containerd[1455]: time="2024-12-13T13:40:02.011936648Z" level=info msg="TearDown network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" successfully"
Dec 13 13:40:02.011997 containerd[1455]: time="2024-12-13T13:40:02.011953479Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" returns successfully"
Dec 13 13:40:02.012985 containerd[1455]: time="2024-12-13T13:40:02.012856561Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\""
Dec 13 13:40:02.012985 containerd[1455]: time="2024-12-13T13:40:02.012917205Z" level=info msg="TearDown network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" successfully"
Dec 13 13:40:02.012985 containerd[1455]: time="2024-12-13T13:40:02.012927694Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" returns successfully"
Dec 13 13:40:02.013042 systemd[1]: run-netns-cni\x2ddfb171be\x2dbbde\x2d4c79\x2d5149\x2d90dc15099679.mount: Deactivated successfully.
Dec 13 13:40:02.014478 containerd[1455]: time="2024-12-13T13:40:02.014137367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:4,}"
Dec 13 13:40:02.026070 kubelet[1849]: I1213 13:40:02.025928 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3"
Dec 13 13:40:02.027660 containerd[1455]: time="2024-12-13T13:40:02.027531857Z" level=info msg="StopPodSandbox for \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\""
Dec 13 13:40:02.028255 containerd[1455]: time="2024-12-13T13:40:02.028134439Z" level=info msg="Ensure that sandbox 4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3 in task-service has been cleanup successfully"
Dec 13 13:40:02.028906 containerd[1455]: time="2024-12-13T13:40:02.028705864Z" level=info msg="TearDown network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" successfully"
Dec 13 13:40:02.028906 containerd[1455]: time="2024-12-13T13:40:02.028734647Z" level=info msg="StopPodSandbox for \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" returns successfully"
Dec 13 13:40:02.029039 containerd[1455]: time="2024-12-13T13:40:02.028988590Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\""
Dec 13 13:40:02.029128 containerd[1455]: time="2024-12-13T13:40:02.029069541Z" level=info msg="TearDown network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" successfully"
Dec 13 13:40:02.029128 containerd[1455]: time="2024-12-13T13:40:02.029081724Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" returns successfully"
Dec 13 13:40:02.030793 systemd[1]: run-netns-cni\x2dd08655b1\x2de61f\x2d80d8\x2d010e\x2d86b9fe86c56f.mount: Deactivated successfully.
Dec 13 13:40:02.031252 containerd[1455]: time="2024-12-13T13:40:02.030850247Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\""
Dec 13 13:40:02.033416 containerd[1455]: time="2024-12-13T13:40:02.033288239Z" level=info msg="TearDown network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" successfully"
Dec 13 13:40:02.033416 containerd[1455]: time="2024-12-13T13:40:02.033308136Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" returns successfully"
Dec 13 13:40:02.034045 containerd[1455]: time="2024-12-13T13:40:02.033930185Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\""
Dec 13 13:40:02.034045 containerd[1455]: time="2024-12-13T13:40:02.034010855Z" level=info msg="TearDown network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" successfully"
Dec 13 13:40:02.034045 containerd[1455]: time="2024-12-13T13:40:02.034022296Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" returns successfully"
Dec 13 13:40:02.034423 containerd[1455]: time="2024-12-13T13:40:02.034310593Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\""
Dec 13 13:40:02.034423 containerd[1455]: time="2024-12-13T13:40:02.034399289Z" level=info msg="TearDown network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" successfully"
Dec 13 13:40:02.034423 containerd[1455]: time="2024-12-13T13:40:02.034410739Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" returns successfully"
Dec 13 13:40:02.034950 containerd[1455]: time="2024-12-13T13:40:02.034672597Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\""
Dec 13 13:40:02.034950 containerd[1455]: time="2024-12-13T13:40:02.034745213Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully"
Dec 13 13:40:02.034950 containerd[1455]: time="2024-12-13T13:40:02.034756754Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully"
Dec 13 13:40:02.035693 containerd[1455]: time="2024-12-13T13:40:02.035363424Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\""
Dec 13 13:40:02.035693 containerd[1455]: time="2024-12-13T13:40:02.035446459Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully"
Dec 13 13:40:02.035693 containerd[1455]: time="2024-12-13T13:40:02.035459052Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully"
Dec 13 13:40:02.035800 containerd[1455]: time="2024-12-13T13:40:02.035758891Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\""
Dec 13 13:40:02.035931 containerd[1455]: time="2024-12-13T13:40:02.035860661Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully"
Dec 13 13:40:02.035931 containerd[1455]: time="2024-12-13T13:40:02.035880788Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully"
Dec 13 13:40:02.036373 containerd[1455]: time="2024-12-13T13:40:02.036226171Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\""
Dec 13 13:40:02.036373 containerd[1455]: time="2024-12-13T13:40:02.036305389Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully"
Dec 13 13:40:02.036373 containerd[1455]: time="2024-12-13T13:40:02.036318103Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully"
Dec 13 13:40:02.037360 containerd[1455]: time="2024-12-13T13:40:02.037089018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:9,}"
Dec 13 13:40:02.539571 kubelet[1849]: E1213 13:40:02.539505 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:03.539842 kubelet[1849]: E1213 13:40:03.539723 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:04.251505 containerd[1455]: time="2024-12-13T13:40:04.251446168Z" level=error msg="Failed to destroy network for sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:04.255653 containerd[1455]: time="2024-12-13T13:40:04.254744035Z" level=error msg="encountered an error cleaning up failed sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:04.255653 containerd[1455]: time="2024-12-13T13:40:04.254830707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:04.255392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268-shm.mount: Deactivated successfully.
Dec 13 13:40:04.256328 kubelet[1849]: E1213 13:40:04.256120 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:04.256328 kubelet[1849]: E1213 13:40:04.256194 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:40:04.256328 kubelet[1849]: E1213 13:40:04.256221 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks"
Dec 13 13:40:04.256451 kubelet[1849]: E1213 13:40:04.256285 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87"
Dec 13 13:40:04.270652 containerd[1455]: time="2024-12-13T13:40:04.270513814Z" level=error msg="Failed to destroy network for sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:04.271289 containerd[1455]: time="2024-12-13T13:40:04.271118581Z" level=error msg="encountered an error cleaning up failed sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:04.271289 containerd[1455]: time="2024-12-13T13:40:04.271175227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:04.271415 kubelet[1849]: E1213 13:40:04.271366 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:04.271415 kubelet[1849]: E1213 13:40:04.271414 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl"
Dec 13 13:40:04.271485 kubelet[1849]: E1213 13:40:04.271437 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl"
Dec 13 13:40:04.271515 kubelet[1849]: E1213 13:40:04.271490 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxhhl" podUID="e992dced-14d2-4b14-8f2a-2ecb469b78aa"
Dec 13 13:40:04.275732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a-shm.mount: Deactivated successfully.
Dec 13 13:40:04.540645 kubelet[1849]: E1213 13:40:04.540390 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:04.568635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214092441.mount: Deactivated successfully.
Dec 13 13:40:04.607220 containerd[1455]: time="2024-12-13T13:40:04.607163063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:04.608125 containerd[1455]: time="2024-12-13T13:40:04.608075054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Dec 13 13:40:04.609469 containerd[1455]: time="2024-12-13T13:40:04.609412367Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:04.611709 containerd[1455]: time="2024-12-13T13:40:04.611682689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:04.612820 containerd[1455]: time="2024-12-13T13:40:04.612330918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.682540872s"
Dec 13 13:40:04.612820 containerd[1455]: time="2024-12-13T13:40:04.612372835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Dec 13 13:40:04.639932 containerd[1455]: time="2024-12-13T13:40:04.639561812Z" level=info msg="CreateContainer within sandbox \"63a295668b578cd2f41c42f231b0cf5a0a143ab51387f6416f67a7e732ef532a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 13:40:04.659592 containerd[1455]: time="2024-12-13T13:40:04.659511523Z" level=info msg="CreateContainer within sandbox \"63a295668b578cd2f41c42f231b0cf5a0a143ab51387f6416f67a7e732ef532a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f5629f47fc3f46904e6afb49ddc27aaa2f2b90c15f96b06b4e6322a4cce9df64\""
Dec 13 13:40:04.661602 containerd[1455]: time="2024-12-13T13:40:04.660163819Z" level=info msg="StartContainer for \"f5629f47fc3f46904e6afb49ddc27aaa2f2b90c15f96b06b4e6322a4cce9df64\""
Dec 13 13:40:05.002132 systemd[1]: Started cri-containerd-f5629f47fc3f46904e6afb49ddc27aaa2f2b90c15f96b06b4e6322a4cce9df64.scope - libcontainer container f5629f47fc3f46904e6afb49ddc27aaa2f2b90c15f96b06b4e6322a4cce9df64.
Dec 13 13:40:05.085327 kubelet[1849]: I1213 13:40:05.085214 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a"
Dec 13 13:40:05.088679 containerd[1455]: time="2024-12-13T13:40:05.086242645Z" level=info msg="StopPodSandbox for \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\""
Dec 13 13:40:05.088679 containerd[1455]: time="2024-12-13T13:40:05.086761984Z" level=info msg="Ensure that sandbox ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a in task-service has been cleanup successfully"
Dec 13 13:40:05.090431 kubelet[1849]: I1213 13:40:05.090350 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268"
Dec 13 13:40:05.090834 containerd[1455]: time="2024-12-13T13:40:05.090780879Z" level=info msg="StopPodSandbox for \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\""
Dec 13 13:40:05.091096 containerd[1455]: time="2024-12-13T13:40:05.091028951Z" level=info msg="Ensure that sandbox 7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268 in task-service has been cleanup successfully"
Dec 13 13:40:05.091219 containerd[1455]: time="2024-12-13T13:40:05.091193168Z" level=info msg="TearDown network for sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\" successfully"
Dec 13 13:40:05.091219 containerd[1455]: time="2024-12-13T13:40:05.091211282Z" level=info msg="StopPodSandbox for \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\" returns successfully"
Dec 13 13:40:05.091374 containerd[1455]: time="2024-12-13T13:40:05.091326095Z" level=info msg="TearDown network for sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\" successfully"
Dec 13 13:40:05.091374 containerd[1455]: time="2024-12-13T13:40:05.091342275Z" level=info msg="StopPodSandbox for \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\" returns successfully"
Dec 13 13:40:05.092219 containerd[1455]: time="2024-12-13T13:40:05.091854100Z" level=info msg="StopPodSandbox for \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\""
Dec 13 13:40:05.092219 containerd[1455]: time="2024-12-13T13:40:05.091927758Z" level=info msg="TearDown network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" successfully"
Dec 13 13:40:05.092219 containerd[1455]: time="2024-12-13T13:40:05.091939169Z" level=info msg="StopPodSandbox for \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" returns successfully"
Dec 13 13:40:05.092219 containerd[1455]: time="2024-12-13T13:40:05.092072437Z" level=info msg="StopPodSandbox for \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\""
Dec 13 13:40:05.092219 containerd[1455]: time="2024-12-13T13:40:05.092133592Z" level=info msg="TearDown network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" successfully"
Dec 13 13:40:05.092219 containerd[1455]: time="2024-12-13T13:40:05.092144031Z" level=info msg="StopPodSandbox for \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" returns successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.092453958Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\""
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.092517978Z" level=info msg="TearDown network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.092528538Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" returns successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.092580805Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\""
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.092806346Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\""
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.092880705Z" level=info msg="TearDown network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.092891325Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" returns successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093067313Z" level=info msg="TearDown network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093081169Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" returns successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093359468Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\""
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093425851Z" level=info msg="TearDown network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093437844Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" returns successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093491794Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\""
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093545745Z" level=info msg="TearDown network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093556345Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" returns successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093920424Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\""
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093988120Z" level=info msg="TearDown network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.093998639Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" returns successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.094072637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:5,}"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.094596164Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\""
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.094682856Z" level=info msg="TearDown network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.094716287Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" returns successfully"
Dec 13 13:40:05.094937 containerd[1455]: time="2024-12-13T13:40:05.094929155Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\""
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.094992423Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully"
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.095003603Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully"
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.095257678Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\""
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.095320645Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully"
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.095331395Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully"
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.095541006Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\""
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.095600427Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully"
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.095611127Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully"
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.096344955Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\""
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.096484175Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully"
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.096498041Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully"
Dec 13 13:40:05.098701 containerd[1455]: time="2024-12-13T13:40:05.096947740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:10,}"
Dec 13 13:40:05.104797 containerd[1455]: time="2024-12-13T13:40:05.104360724Z" level=info msg="StartContainer for \"f5629f47fc3f46904e6afb49ddc27aaa2f2b90c15f96b06b4e6322a4cce9df64\" returns successfully"
Dec 13 13:40:05.181269 systemd[1]: run-netns-cni\x2d33043eb2\x2d36a1\x2db24e\x2d75ee\x2d28c98b15c153.mount: Deactivated successfully.
Dec 13 13:40:05.182072 systemd[1]: run-netns-cni\x2dcfde3ccb\x2d5e86\x2d65f2\x2d751a\x2d14d78d89d0ef.mount: Deactivated successfully.
Dec 13 13:40:05.209975 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 13:40:05.210147 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Dec 13 13:40:05.251772 containerd[1455]: time="2024-12-13T13:40:05.251729076Z" level=error msg="Failed to destroy network for sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:05.255832 containerd[1455]: time="2024-12-13T13:40:05.255068974Z" level=error msg="Failed to destroy network for sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:05.255832 containerd[1455]: time="2024-12-13T13:40:05.255126933Z" level=error msg="encountered an error cleaning up failed sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:05.255832 containerd[1455]: time="2024-12-13T13:40:05.255373041Z" level=error msg="encountered an error cleaning up failed sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:05.255537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b-shm.mount: Deactivated successfully.
Dec 13 13:40:05.257685 containerd[1455]: time="2024-12-13T13:40:05.256179145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:05.257685 containerd[1455]: time="2024-12-13T13:40:05.256321089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:05.258602 kubelet[1849]: E1213 13:40:05.258022 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:05.258602 kubelet[1849]: E1213 13:40:05.258088 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl"
Dec 13 13:40:05.258602 kubelet[1849]: E1213 13:40:05.258116 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxhhl"
Dec 13 13:40:05.258794 kubelet[1849]: E1213 13:40:05.258175 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxhhl_default(e992dced-14d2-4b14-8f2a-2ecb469b78aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxhhl" podUID="e992dced-14d2-4b14-8f2a-2ecb469b78aa"
Dec 13 13:40:05.258794 kubelet[1849]: E1213 13:40:05.258398 1849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:40:05.258794 kubelet[1849]: E1213 13:40:05.258424 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network
for sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks" Dec 13 13:40:05.258927 kubelet[1849]: E1213 13:40:05.258444 1849 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8fks" Dec 13 13:40:05.258927 kubelet[1849]: E1213 13:40:05.258498 1849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8fks_calico-system(5d03366d-217c-4fd9-b86c-ea2d2a91fb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8fks" podUID="5d03366d-217c-4fd9-b86c-ea2d2a91fb87" Dec 13 13:40:05.259920 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4-shm.mount: Deactivated successfully. 
Dec 13 13:40:05.541449 kubelet[1849]: E1213 13:40:05.541240 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:06.111055 kubelet[1849]: I1213 13:40:06.110934 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b" Dec 13 13:40:06.113959 containerd[1455]: time="2024-12-13T13:40:06.113373755Z" level=info msg="StopPodSandbox for \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\"" Dec 13 13:40:06.114650 containerd[1455]: time="2024-12-13T13:40:06.114426269Z" level=info msg="Ensure that sandbox 3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b in task-service has been cleanup successfully" Dec 13 13:40:06.122593 systemd[1]: run-netns-cni\x2d9e72108d\x2dd405\x2d4c5e\x2d4129\x2d341bb46e1dfb.mount: Deactivated successfully. Dec 13 13:40:06.123253 containerd[1455]: time="2024-12-13T13:40:06.122804229Z" level=info msg="TearDown network for sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\" successfully" Dec 13 13:40:06.123253 containerd[1455]: time="2024-12-13T13:40:06.122870903Z" level=info msg="StopPodSandbox for \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\" returns successfully" Dec 13 13:40:06.126375 kubelet[1849]: I1213 13:40:06.125954 1849 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-znh9x" podStartSLOduration=5.493528915 podStartE2EDuration="32.125864318s" podCreationTimestamp="2024-12-13 13:39:34 +0000 UTC" firstStartedPulling="2024-12-13 13:39:37.980751083 +0000 UTC m=+5.179980405" lastFinishedPulling="2024-12-13 13:40:04.613086486 +0000 UTC m=+31.812315808" observedRunningTime="2024-12-13 13:40:06.120034121 +0000 UTC m=+33.319263503" watchObservedRunningTime="2024-12-13 13:40:06.125864318 +0000 UTC m=+33.325093690" Dec 13 13:40:06.128281 
containerd[1455]: time="2024-12-13T13:40:06.127018912Z" level=info msg="StopPodSandbox for \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\"" Dec 13 13:40:06.128281 containerd[1455]: time="2024-12-13T13:40:06.127311167Z" level=info msg="TearDown network for sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\" successfully" Dec 13 13:40:06.128281 containerd[1455]: time="2024-12-13T13:40:06.127358256Z" level=info msg="StopPodSandbox for \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\" returns successfully" Dec 13 13:40:06.130776 containerd[1455]: time="2024-12-13T13:40:06.130592799Z" level=info msg="StopPodSandbox for \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\"" Dec 13 13:40:06.131920 containerd[1455]: time="2024-12-13T13:40:06.131736102Z" level=info msg="TearDown network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" successfully" Dec 13 13:40:06.131920 containerd[1455]: time="2024-12-13T13:40:06.131819708Z" level=info msg="StopPodSandbox for \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" returns successfully" Dec 13 13:40:06.134316 kubelet[1849]: I1213 13:40:06.133958 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4" Dec 13 13:40:06.135866 containerd[1455]: time="2024-12-13T13:40:06.135430284Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\"" Dec 13 13:40:06.135866 containerd[1455]: time="2024-12-13T13:40:06.135700338Z" level=info msg="TearDown network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" successfully" Dec 13 13:40:06.135866 containerd[1455]: time="2024-12-13T13:40:06.135732097Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" returns successfully" Dec 13 
13:40:06.136703 containerd[1455]: time="2024-12-13T13:40:06.136516351Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\"" Dec 13 13:40:06.137174 containerd[1455]: time="2024-12-13T13:40:06.137120428Z" level=info msg="StopPodSandbox for \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\"" Dec 13 13:40:06.137946 containerd[1455]: time="2024-12-13T13:40:06.137549668Z" level=info msg="TearDown network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" successfully" Dec 13 13:40:06.137946 containerd[1455]: time="2024-12-13T13:40:06.137590274Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" returns successfully" Dec 13 13:40:06.138533 containerd[1455]: time="2024-12-13T13:40:06.138485083Z" level=info msg="Ensure that sandbox 243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4 in task-service has been cleanup successfully" Dec 13 13:40:06.142089 containerd[1455]: time="2024-12-13T13:40:06.141750876Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\"" Dec 13 13:40:06.142662 containerd[1455]: time="2024-12-13T13:40:06.142582186Z" level=info msg="TearDown network for sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\" successfully" Dec 13 13:40:06.144995 containerd[1455]: time="2024-12-13T13:40:06.144914488Z" level=info msg="StopPodSandbox for \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\" returns successfully" Dec 13 13:40:06.145850 systemd[1]: run-netns-cni\x2d74ec5e2f\x2d1c22\x2d68a4\x2d0a60\x2d3e86c63848e8.mount: Deactivated successfully. 
Dec 13 13:40:06.147696 containerd[1455]: time="2024-12-13T13:40:06.145294917Z" level=info msg="TearDown network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" successfully" Dec 13 13:40:06.149312 containerd[1455]: time="2024-12-13T13:40:06.146770791Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" returns successfully" Dec 13 13:40:06.150274 containerd[1455]: time="2024-12-13T13:40:06.148266982Z" level=info msg="StopPodSandbox for \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\"" Dec 13 13:40:06.150811 containerd[1455]: time="2024-12-13T13:40:06.150733133Z" level=info msg="TearDown network for sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\" successfully" Dec 13 13:40:06.151392 containerd[1455]: time="2024-12-13T13:40:06.151319177Z" level=info msg="StopPodSandbox for \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\" returns successfully" Dec 13 13:40:06.151850 containerd[1455]: time="2024-12-13T13:40:06.151394467Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\"" Dec 13 13:40:06.152525 containerd[1455]: time="2024-12-13T13:40:06.152134588Z" level=info msg="TearDown network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" successfully" Dec 13 13:40:06.152525 containerd[1455]: time="2024-12-13T13:40:06.152174281Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" returns successfully" Dec 13 13:40:06.153855 containerd[1455]: time="2024-12-13T13:40:06.153715707Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\"" Dec 13 13:40:06.154143 containerd[1455]: time="2024-12-13T13:40:06.153800235Z" level=info msg="StopPodSandbox for \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\"" Dec 13 13:40:06.154491 
containerd[1455]: time="2024-12-13T13:40:06.153955705Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully" Dec 13 13:40:06.154903 containerd[1455]: time="2024-12-13T13:40:06.154704532Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully" Dec 13 13:40:06.155403 containerd[1455]: time="2024-12-13T13:40:06.155241214Z" level=info msg="TearDown network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" successfully" Dec 13 13:40:06.155403 containerd[1455]: time="2024-12-13T13:40:06.155280867Z" level=info msg="StopPodSandbox for \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" returns successfully" Dec 13 13:40:06.156695 containerd[1455]: time="2024-12-13T13:40:06.156247381Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\"" Dec 13 13:40:06.156695 containerd[1455]: time="2024-12-13T13:40:06.156470207Z" level=info msg="TearDown network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" successfully" Dec 13 13:40:06.156695 containerd[1455]: time="2024-12-13T13:40:06.156499190Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" returns successfully" Dec 13 13:40:06.156695 containerd[1455]: time="2024-12-13T13:40:06.156600559Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\"" Dec 13 13:40:06.157808 containerd[1455]: time="2024-12-13T13:40:06.157731469Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\"" Dec 13 13:40:06.158236 containerd[1455]: time="2024-12-13T13:40:06.158198109Z" level=info msg="TearDown network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" successfully" Dec 13 13:40:06.158762 
containerd[1455]: time="2024-12-13T13:40:06.158610198Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" returns successfully" Dec 13 13:40:06.159126 containerd[1455]: time="2024-12-13T13:40:06.158247231Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully" Dec 13 13:40:06.159126 containerd[1455]: time="2024-12-13T13:40:06.159023990Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully" Dec 13 13:40:06.160145 containerd[1455]: time="2024-12-13T13:40:06.159980285Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\"" Dec 13 13:40:06.160350 containerd[1455]: time="2024-12-13T13:40:06.160198180Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully" Dec 13 13:40:06.160350 containerd[1455]: time="2024-12-13T13:40:06.160233968Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully" Dec 13 13:40:06.161558 containerd[1455]: time="2024-12-13T13:40:06.161028480Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\"" Dec 13 13:40:06.161558 containerd[1455]: time="2024-12-13T13:40:06.161192997Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully" Dec 13 13:40:06.161558 containerd[1455]: time="2024-12-13T13:40:06.161220458Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully" Dec 13 13:40:06.161558 containerd[1455]: time="2024-12-13T13:40:06.161330934Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\"" Dec 13 13:40:06.161558 
containerd[1455]: time="2024-12-13T13:40:06.161452160Z" level=info msg="TearDown network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" successfully" Dec 13 13:40:06.161558 containerd[1455]: time="2024-12-13T13:40:06.161477458Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" returns successfully" Dec 13 13:40:06.163570 containerd[1455]: time="2024-12-13T13:40:06.163421824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:11,}" Dec 13 13:40:06.164985 containerd[1455]: time="2024-12-13T13:40:06.164899221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:6,}" Dec 13 13:40:06.542206 kubelet[1849]: E1213 13:40:06.542151 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:06.811035 systemd-networkd[1350]: califea935b0287: Link UP Dec 13 13:40:06.812386 systemd-networkd[1350]: cali1f50c551c91: Link UP Dec 13 13:40:06.814747 systemd-networkd[1350]: cali1f50c551c91: Gained carrier Dec 13 13:40:06.815036 systemd-networkd[1350]: califea935b0287: Gained carrier Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.275 [INFO][2968] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.358 [INFO][2968] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0 nginx-deployment-6d5f899847- default e992dced-14d2-4b14-8f2a-2ecb469b78aa 1143 0 2024-12-13 13:39:57 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.147 nginx-deployment-6d5f899847-vxhhl eth0 default [] [] [kns.default ksa.default.default] califea935b0287 [] []}} ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Namespace="default" Pod="nginx-deployment-6d5f899847-vxhhl" WorkloadEndpoint="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.358 [INFO][2968] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Namespace="default" Pod="nginx-deployment-6d5f899847-vxhhl" WorkloadEndpoint="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.437 [INFO][2984] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" HandleID="k8s-pod-network.c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Workload="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.461 [INFO][2984] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" HandleID="k8s-pod-network.c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Workload="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000fd7f0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.147", "pod":"nginx-deployment-6d5f899847-vxhhl", "timestamp":"2024-12-13 13:40:06.437816242 +0000 UTC"}, Hostname:"172.24.4.147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 
13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.461 [INFO][2984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.462 [INFO][2984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.462 [INFO][2984] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.147' Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.465 [INFO][2984] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" host="172.24.4.147" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.473 [INFO][2984] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.147" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.493 [INFO][2984] ipam/ipam.go 489: Trying affinity for 192.168.48.0/26 host="172.24.4.147" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.499 [INFO][2984] ipam/ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="172.24.4.147" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.505 [INFO][2984] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="172.24.4.147" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.505 [INFO][2984] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" host="172.24.4.147" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.508 [INFO][2984] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.526 [INFO][2984] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 
handle="k8s-pod-network.c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" host="172.24.4.147" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.642 [INFO][2984] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.48.1/26] block=192.168.48.0/26 handle="k8s-pod-network.c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" host="172.24.4.147" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.642 [INFO][2984] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.1/26] handle="k8s-pod-network.c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" host="172.24.4.147" Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.642 [INFO][2984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:40:06.974020 containerd[1455]: 2024-12-13 13:40:06.642 [INFO][2984] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.1/26] IPv6=[] ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" HandleID="k8s-pod-network.c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Workload="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" Dec 13 13:40:06.974878 containerd[1455]: 2024-12-13 13:40:06.649 [INFO][2968] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Namespace="default" Pod="nginx-deployment-6d5f899847-vxhhl" WorkloadEndpoint="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"e992dced-14d2-4b14-8f2a-2ecb469b78aa", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 39, 57, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.147", ContainerID:"", Pod:"nginx-deployment-6d5f899847-vxhhl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califea935b0287", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:40:06.974878 containerd[1455]: 2024-12-13 13:40:06.649 [INFO][2968] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.48.1/32] ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Namespace="default" Pod="nginx-deployment-6d5f899847-vxhhl" WorkloadEndpoint="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" Dec 13 13:40:06.974878 containerd[1455]: 2024-12-13 13:40:06.649 [INFO][2968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califea935b0287 ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Namespace="default" Pod="nginx-deployment-6d5f899847-vxhhl" WorkloadEndpoint="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" Dec 13 13:40:06.974878 containerd[1455]: 2024-12-13 13:40:06.820 [INFO][2968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Namespace="default" Pod="nginx-deployment-6d5f899847-vxhhl" WorkloadEndpoint="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" Dec 13 13:40:06.974878 containerd[1455]: 
2024-12-13 13:40:06.821 [INFO][2968] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Namespace="default" Pod="nginx-deployment-6d5f899847-vxhhl" WorkloadEndpoint="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"e992dced-14d2-4b14-8f2a-2ecb469b78aa", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.147", ContainerID:"c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc", Pod:"nginx-deployment-6d5f899847-vxhhl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califea935b0287", MAC:"b6:35:b6:0e:ce:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:40:06.974878 containerd[1455]: 2024-12-13 13:40:06.972 [INFO][2968] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc" Namespace="default" 
Pod="nginx-deployment-6d5f899847-vxhhl" WorkloadEndpoint="172.24.4.147-k8s-nginx--deployment--6d5f899847--vxhhl-eth0" Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.264 [INFO][2956] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.358 [INFO][2956] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.147-k8s-csi--node--driver--r8fks-eth0 csi-node-driver- calico-system 5d03366d-217c-4fd9-b86c-ea2d2a91fb87 1045 0 2024-12-13 13:39:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.147 csi-node-driver-r8fks eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1f50c551c91 [] []}} ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Namespace="calico-system" Pod="csi-node-driver-r8fks" WorkloadEndpoint="172.24.4.147-k8s-csi--node--driver--r8fks-" Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.358 [INFO][2956] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Namespace="calico-system" Pod="csi-node-driver-r8fks" WorkloadEndpoint="172.24.4.147-k8s-csi--node--driver--r8fks-eth0" Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.448 [INFO][2983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" HandleID="k8s-pod-network.163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Workload="172.24.4.147-k8s-csi--node--driver--r8fks-eth0" Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 
13:40:06.470 [INFO][2983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" HandleID="k8s-pod-network.163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Workload="172.24.4.147-k8s-csi--node--driver--r8fks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103b90), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.147", "pod":"csi-node-driver-r8fks", "timestamp":"2024-12-13 13:40:06.448802258 +0000 UTC"}, Hostname:"172.24.4.147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.470 [INFO][2983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.642 [INFO][2983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.642 [INFO][2983] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.147'
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.646 [INFO][2983] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" host="172.24.4.147"
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.653 [INFO][2983] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.147"
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.679 [INFO][2983] ipam/ipam.go 489: Trying affinity for 192.168.48.0/26 host="172.24.4.147"
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.682 [INFO][2983] ipam/ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="172.24.4.147"
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.686 [INFO][2983] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="172.24.4.147"
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.686 [INFO][2983] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" host="172.24.4.147"
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.689 [INFO][2983] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.710 [INFO][2983] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" host="172.24.4.147"
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.786 [INFO][2983] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.48.2/26] block=192.168.48.0/26 handle="k8s-pod-network.163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" host="172.24.4.147"
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.786 [INFO][2983] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.2/26] handle="k8s-pod-network.163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" host="172.24.4.147"
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.786 [INFO][2983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 13:40:07.224562 containerd[1455]: 2024-12-13 13:40:06.786 [INFO][2983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.2/26] IPv6=[] ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" HandleID="k8s-pod-network.163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Workload="172.24.4.147-k8s-csi--node--driver--r8fks-eth0"
Dec 13 13:40:07.226520 containerd[1455]: 2024-12-13 13:40:06.791 [INFO][2956] cni-plugin/k8s.go 386: Populated endpoint ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Namespace="calico-system" Pod="csi-node-driver-r8fks" WorkloadEndpoint="172.24.4.147-k8s-csi--node--driver--r8fks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.147-k8s-csi--node--driver--r8fks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d03366d-217c-4fd9-b86c-ea2d2a91fb87", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 39, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.147", ContainerID:"", Pod:"csi-node-driver-r8fks", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f50c551c91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:40:07.226520 containerd[1455]: 2024-12-13 13:40:06.791 [INFO][2956] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.48.2/32] ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Namespace="calico-system" Pod="csi-node-driver-r8fks" WorkloadEndpoint="172.24.4.147-k8s-csi--node--driver--r8fks-eth0"
Dec 13 13:40:07.226520 containerd[1455]: 2024-12-13 13:40:06.791 [INFO][2956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f50c551c91 ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Namespace="calico-system" Pod="csi-node-driver-r8fks" WorkloadEndpoint="172.24.4.147-k8s-csi--node--driver--r8fks-eth0"
Dec 13 13:40:07.226520 containerd[1455]: 2024-12-13 13:40:06.821 [INFO][2956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Namespace="calico-system" Pod="csi-node-driver-r8fks" WorkloadEndpoint="172.24.4.147-k8s-csi--node--driver--r8fks-eth0"
Dec 13 13:40:07.226520 containerd[1455]: 2024-12-13 13:40:06.821 [INFO][2956] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Namespace="calico-system" Pod="csi-node-driver-r8fks" WorkloadEndpoint="172.24.4.147-k8s-csi--node--driver--r8fks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.147-k8s-csi--node--driver--r8fks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d03366d-217c-4fd9-b86c-ea2d2a91fb87", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 39, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.147", ContainerID:"163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff", Pod:"csi-node-driver-r8fks", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f50c551c91", MAC:"22:17:93:bd:29:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:40:07.226520 containerd[1455]: 2024-12-13 13:40:07.205 [INFO][2956] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff" Namespace="calico-system" Pod="csi-node-driver-r8fks" WorkloadEndpoint="172.24.4.147-k8s-csi--node--driver--r8fks-eth0"
Dec 13 13:40:07.313304 containerd[1455]: time="2024-12-13T13:40:07.312901813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:40:07.313304 containerd[1455]: time="2024-12-13T13:40:07.313011938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:40:07.313304 containerd[1455]: time="2024-12-13T13:40:07.313039640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:40:07.313304 containerd[1455]: time="2024-12-13T13:40:07.313163271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:40:07.334668 containerd[1455]: time="2024-12-13T13:40:07.330447637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:40:07.334668 containerd[1455]: time="2024-12-13T13:40:07.332978109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:40:07.334668 containerd[1455]: time="2024-12-13T13:40:07.332994039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:40:07.334668 containerd[1455]: time="2024-12-13T13:40:07.333089227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:40:07.368898 systemd[1]: Started cri-containerd-c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc.scope - libcontainer container c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc.
Dec 13 13:40:07.380301 systemd[1]: Started cri-containerd-163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff.scope - libcontainer container 163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff.
Dec 13 13:40:07.420880 containerd[1455]: time="2024-12-13T13:40:07.420846740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8fks,Uid:5d03366d-217c-4fd9-b86c-ea2d2a91fb87,Namespace:calico-system,Attempt:11,} returns sandbox id \"163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff\""
Dec 13 13:40:07.424994 containerd[1455]: time="2024-12-13T13:40:07.424951190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 13:40:07.444893 kernel: bpftool[3236]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Dec 13 13:40:07.464124 containerd[1455]: time="2024-12-13T13:40:07.464085584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxhhl,Uid:e992dced-14d2-4b14-8f2a-2ecb469b78aa,Namespace:default,Attempt:6,} returns sandbox id \"c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc\""
Dec 13 13:40:07.543648 kubelet[1849]: E1213 13:40:07.543515 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:07.721780 systemd-networkd[1350]: vxlan.calico: Link UP
Dec 13 13:40:07.721789 systemd-networkd[1350]: vxlan.calico: Gained carrier
Dec 13 13:40:08.256894 systemd-networkd[1350]: califea935b0287: Gained IPv6LL
Dec 13 13:40:08.258185 systemd-networkd[1350]: cali1f50c551c91: Gained IPv6LL
Dec 13 13:40:08.544769 kubelet[1849]: E1213 13:40:08.544476 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:09.025012 systemd-networkd[1350]: vxlan.calico: Gained IPv6LL
Dec 13 13:40:09.547278 kubelet[1849]: E1213 13:40:09.547148 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:10.548737 containerd[1455]: time="2024-12-13T13:40:10.546800079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:10.549682 kubelet[1849]: E1213 13:40:10.549562 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:10.558726 containerd[1455]: time="2024-12-13T13:40:10.558557300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 13:40:10.568284 containerd[1455]: time="2024-12-13T13:40:10.568175024Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:10.577396 containerd[1455]: time="2024-12-13T13:40:10.577252699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:10.579437 containerd[1455]: time="2024-12-13T13:40:10.579195398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 3.154197289s"
Dec 13 13:40:10.579437 containerd[1455]: time="2024-12-13T13:40:10.579266611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 13:40:10.580502 containerd[1455]: time="2024-12-13T13:40:10.580229028Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 13:40:10.583978 containerd[1455]: time="2024-12-13T13:40:10.583888013Z" level=info msg="CreateContainer within sandbox \"163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 13:40:10.701479 containerd[1455]: time="2024-12-13T13:40:10.701201416Z" level=info msg="CreateContainer within sandbox \"163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2c2067f238aca05fc90a8327fc0aa150c2c2233832f00765791aa977235f7304\""
Dec 13 13:40:10.702566 containerd[1455]: time="2024-12-13T13:40:10.702494211Z" level=info msg="StartContainer for \"2c2067f238aca05fc90a8327fc0aa150c2c2233832f00765791aa977235f7304\""
Dec 13 13:40:10.771840 systemd[1]: run-containerd-runc-k8s.io-2c2067f238aca05fc90a8327fc0aa150c2c2233832f00765791aa977235f7304-runc.LjaXtM.mount: Deactivated successfully.
Dec 13 13:40:10.781826 systemd[1]: Started cri-containerd-2c2067f238aca05fc90a8327fc0aa150c2c2233832f00765791aa977235f7304.scope - libcontainer container 2c2067f238aca05fc90a8327fc0aa150c2c2233832f00765791aa977235f7304.
Dec 13 13:40:10.881215 containerd[1455]: time="2024-12-13T13:40:10.880880540Z" level=info msg="StartContainer for \"2c2067f238aca05fc90a8327fc0aa150c2c2233832f00765791aa977235f7304\" returns successfully"
Dec 13 13:40:11.550098 kubelet[1849]: E1213 13:40:11.550017 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:12.550766 kubelet[1849]: E1213 13:40:12.550706 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:13.551417 kubelet[1849]: E1213 13:40:13.551361 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:14.446419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578834202.mount: Deactivated successfully.
Dec 13 13:40:14.519402 kubelet[1849]: E1213 13:40:14.519357 1849 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:14.551881 kubelet[1849]: E1213 13:40:14.551836 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:15.552014 kubelet[1849]: E1213 13:40:15.551977 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:15.868832 containerd[1455]: time="2024-12-13T13:40:15.867867389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:15.871219 containerd[1455]: time="2024-12-13T13:40:15.870996209Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027"
Dec 13 13:40:15.872764 containerd[1455]: time="2024-12-13T13:40:15.872562719Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:15.879738 containerd[1455]: time="2024-12-13T13:40:15.879529907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:15.882686 containerd[1455]: time="2024-12-13T13:40:15.882354539Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 5.302063233s"
Dec 13 13:40:15.882686 containerd[1455]: time="2024-12-13T13:40:15.882440580Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 13:40:15.884567 containerd[1455]: time="2024-12-13T13:40:15.883497747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 13:40:15.888229 containerd[1455]: time="2024-12-13T13:40:15.888141801Z" level=info msg="CreateContainer within sandbox \"c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 13:40:15.920722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2449236548.mount: Deactivated successfully.
Dec 13 13:40:15.927257 containerd[1455]: time="2024-12-13T13:40:15.926990656Z" level=info msg="CreateContainer within sandbox \"c54f5412e802c378b4b718044995659374a836b11863fdbadf87c901ce3e41dc\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5e93985a430880770b5b0b7f4d375aff98d1eec4d451662fc521afaed3411b71\""
Dec 13 13:40:15.928415 containerd[1455]: time="2024-12-13T13:40:15.928044687Z" level=info msg="StartContainer for \"5e93985a430880770b5b0b7f4d375aff98d1eec4d451662fc521afaed3411b71\""
Dec 13 13:40:15.973758 systemd[1]: Started cri-containerd-5e93985a430880770b5b0b7f4d375aff98d1eec4d451662fc521afaed3411b71.scope - libcontainer container 5e93985a430880770b5b0b7f4d375aff98d1eec4d451662fc521afaed3411b71.
Dec 13 13:40:16.004038 containerd[1455]: time="2024-12-13T13:40:16.003956110Z" level=info msg="StartContainer for \"5e93985a430880770b5b0b7f4d375aff98d1eec4d451662fc521afaed3411b71\" returns successfully"
Dec 13 13:40:16.552858 kubelet[1849]: E1213 13:40:16.552747 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:16.766581 kubelet[1849]: I1213 13:40:16.766525 1849 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-vxhhl" podStartSLOduration=11.348644761 podStartE2EDuration="19.766446718s" podCreationTimestamp="2024-12-13 13:39:57 +0000 UTC" firstStartedPulling="2024-12-13 13:40:07.465379108 +0000 UTC m=+34.664608430" lastFinishedPulling="2024-12-13 13:40:15.883181015 +0000 UTC m=+43.082410387" observedRunningTime="2024-12-13 13:40:16.54082182 +0000 UTC m=+43.740051193" watchObservedRunningTime="2024-12-13 13:40:16.766446718 +0000 UTC m=+43.965676090"
Dec 13 13:40:17.553152 kubelet[1849]: E1213 13:40:17.553069 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:18.553870 kubelet[1849]: E1213 13:40:18.553730 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:19.554496 kubelet[1849]: E1213 13:40:19.554408 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:19.977597 containerd[1455]: time="2024-12-13T13:40:19.977483320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:19.982004 containerd[1455]: time="2024-12-13T13:40:19.981539521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 13:40:19.983867 containerd[1455]: time="2024-12-13T13:40:19.983793568Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:19.989557 containerd[1455]: time="2024-12-13T13:40:19.989488395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:40:19.993041 containerd[1455]: time="2024-12-13T13:40:19.991745759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 4.108159336s"
Dec 13 13:40:19.993041 containerd[1455]: time="2024-12-13T13:40:19.991825899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 13:40:19.995567 containerd[1455]: time="2024-12-13T13:40:19.995480838Z" level=info msg="CreateContainer within sandbox \"163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 13:40:20.038471 containerd[1455]: time="2024-12-13T13:40:20.038309189Z" level=info msg="CreateContainer within sandbox \"163f7c69e700daab54a51ef49747482a126e0cce4668192a9d7006c299ce14ff\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2e46ad45946518487d3c2dcbc36361446ccf456c5e3dcf79eac042532764e6ae\""
Dec 13 13:40:20.039346 containerd[1455]: time="2024-12-13T13:40:20.039250500Z" level=info msg="StartContainer for \"2e46ad45946518487d3c2dcbc36361446ccf456c5e3dcf79eac042532764e6ae\""
Dec 13 13:40:20.118847 systemd[1]: Started cri-containerd-2e46ad45946518487d3c2dcbc36361446ccf456c5e3dcf79eac042532764e6ae.scope - libcontainer container 2e46ad45946518487d3c2dcbc36361446ccf456c5e3dcf79eac042532764e6ae.
Dec 13 13:40:20.170858 containerd[1455]: time="2024-12-13T13:40:20.170826152Z" level=info msg="StartContainer for \"2e46ad45946518487d3c2dcbc36361446ccf456c5e3dcf79eac042532764e6ae\" returns successfully"
Dec 13 13:40:20.506571 kubelet[1849]: I1213 13:40:20.506497 1849 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-r8fks" podStartSLOduration=33.937996973 podStartE2EDuration="46.506416014s" podCreationTimestamp="2024-12-13 13:39:34 +0000 UTC" firstStartedPulling="2024-12-13 13:40:07.423743878 +0000 UTC m=+34.622973200" lastFinishedPulling="2024-12-13 13:40:19.992162859 +0000 UTC m=+47.191392241" observedRunningTime="2024-12-13 13:40:20.505276502 +0000 UTC m=+47.704505874" watchObservedRunningTime="2024-12-13 13:40:20.506416014 +0000 UTC m=+47.705645437"
Dec 13 13:40:20.555328 kubelet[1849]: E1213 13:40:20.555233 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:20.692919 kubelet[1849]: I1213 13:40:20.692756 1849 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 13:40:20.696386 kubelet[1849]: I1213 13:40:20.696030 1849 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 13:40:21.555545 kubelet[1849]: E1213 13:40:21.555453 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:22.556427 kubelet[1849]: E1213 13:40:22.556292 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:23.558746 kubelet[1849]: E1213 13:40:23.557318 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:24.558186 kubelet[1849]: E1213 13:40:24.558083 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:25.558673 kubelet[1849]: E1213 13:40:25.558583 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:26.559818 kubelet[1849]: E1213 13:40:26.559731 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:27.560350 kubelet[1849]: E1213 13:40:27.560276 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:28.561068 kubelet[1849]: E1213 13:40:28.560996 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:29.258615 kubelet[1849]: I1213 13:40:29.258451 1849 topology_manager.go:215] "Topology Admit Handler" podUID="cc6fb84a-d8d3-46ad-a120-93ac12998370" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 13:40:29.274938 systemd[1]: Created slice kubepods-besteffort-podcc6fb84a_d8d3_46ad_a120_93ac12998370.slice - libcontainer container kubepods-besteffort-podcc6fb84a_d8d3_46ad_a120_93ac12998370.slice.
Dec 13 13:40:29.439215 kubelet[1849]: I1213 13:40:29.439152 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/cc6fb84a-d8d3-46ad-a120-93ac12998370-data\") pod \"nfs-server-provisioner-0\" (UID: \"cc6fb84a-d8d3-46ad-a120-93ac12998370\") " pod="default/nfs-server-provisioner-0"
Dec 13 13:40:29.439430 kubelet[1849]: I1213 13:40:29.439262 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m78j\" (UniqueName: \"kubernetes.io/projected/cc6fb84a-d8d3-46ad-a120-93ac12998370-kube-api-access-2m78j\") pod \"nfs-server-provisioner-0\" (UID: \"cc6fb84a-d8d3-46ad-a120-93ac12998370\") " pod="default/nfs-server-provisioner-0"
Dec 13 13:40:29.561779 kubelet[1849]: E1213 13:40:29.561537 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:40:29.583951 containerd[1455]: time="2024-12-13T13:40:29.583833518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cc6fb84a-d8d3-46ad-a120-93ac12998370,Namespace:default,Attempt:0,}"
Dec 13 13:40:29.868938 systemd-networkd[1350]: cali60e51b789ff: Link UP
Dec 13 13:40:29.869423 systemd-networkd[1350]: cali60e51b789ff: Gained carrier
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.724 [INFO][3535] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.147-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default cc6fb84a-d8d3-46ad-a120-93ac12998370 1282 0 2024-12-13 13:40:29 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.147 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.147-k8s-nfs--server--provisioner--0-"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.724 [INFO][3535] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.147-k8s-nfs--server--provisioner--0-eth0"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.791 [INFO][3545] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" HandleID="k8s-pod-network.d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Workload="172.24.4.147-k8s-nfs--server--provisioner--0-eth0"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.815 [INFO][3545] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" HandleID="k8s-pod-network.d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Workload="172.24.4.147-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ae0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.147", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 13:40:29.791273778 +0000 UTC"}, Hostname:"172.24.4.147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.815 [INFO][3545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.815 [INFO][3545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.815 [INFO][3545] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.147'
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.819 [INFO][3545] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" host="172.24.4.147"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.826 [INFO][3545] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.147"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.835 [INFO][3545] ipam/ipam.go 489: Trying affinity for 192.168.48.0/26 host="172.24.4.147"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.838 [INFO][3545] ipam/ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="172.24.4.147"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.841 [INFO][3545] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="172.24.4.147"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.842 [INFO][3545] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" host="172.24.4.147"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.844 [INFO][3545] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.850 [INFO][3545] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" host="172.24.4.147"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.862 [INFO][3545] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.48.3/26] block=192.168.48.0/26 handle="k8s-pod-network.d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" host="172.24.4.147"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.862 [INFO][3545] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.3/26] handle="k8s-pod-network.d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" host="172.24.4.147"
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.862 [INFO][3545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 13:40:29.900524 containerd[1455]: 2024-12-13 13:40:29.862 [INFO][3545] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.3/26] IPv6=[] ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" HandleID="k8s-pod-network.d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Workload="172.24.4.147-k8s-nfs--server--provisioner--0-eth0"
Dec 13 13:40:29.905062 containerd[1455]: 2024-12-13 13:40:29.865 [INFO][3535] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.147-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.147-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"cc6fb84a-d8d3-46ad-a120-93ac12998370", ResourceVersion:"1282", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 40, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.147", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:40:29.905062 containerd[1455]: 2024-12-13 13:40:29.866 [INFO][3535] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.48.3/32] ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.147-k8s-nfs--server--provisioner--0-eth0"
Dec 13 13:40:29.905062 containerd[1455]: 2024-12-13 13:40:29.866 [INFO][3535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.147-k8s-nfs--server--provisioner--0-eth0"
Dec 13 13:40:29.905062 containerd[1455]: 2024-12-13 13:40:29.869 [INFO][3535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.147-k8s-nfs--server--provisioner--0-eth0"
Dec 13 13:40:29.905360 containerd[1455]: 2024-12-13 13:40:29.870 [INFO][3535] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.147-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.147-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"cc6fb84a-d8d3-46ad-a120-93ac12998370", ResourceVersion:"1282", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 40, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil),
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.147", ContainerID:"d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3e:32:69:a2:e0:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:40:29.905360 containerd[1455]: 2024-12-13 13:40:29.897 [INFO][3535] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.147-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:40:29.940589 containerd[1455]: time="2024-12-13T13:40:29.940335067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:40:29.940589 containerd[1455]: time="2024-12-13T13:40:29.940388857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:40:29.940589 containerd[1455]: time="2024-12-13T13:40:29.940408665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:40:29.940589 containerd[1455]: time="2024-12-13T13:40:29.940508512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:40:29.966857 systemd[1]: Started cri-containerd-d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc.scope - libcontainer container d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc. Dec 13 13:40:30.019268 containerd[1455]: time="2024-12-13T13:40:30.019209982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cc6fb84a-d8d3-46ad-a120-93ac12998370,Namespace:default,Attempt:0,} returns sandbox id \"d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc\"" Dec 13 13:40:30.023246 containerd[1455]: time="2024-12-13T13:40:30.023177715Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 13:40:30.562372 kubelet[1849]: E1213 13:40:30.562302 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:31.489421 systemd-networkd[1350]: cali60e51b789ff: Gained IPv6LL Dec 13 13:40:31.563355 kubelet[1849]: E1213 13:40:31.563333 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:32.563914 kubelet[1849]: E1213 13:40:32.563869 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:33.564782 kubelet[1849]: E1213 13:40:33.564732 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:33.661613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439383451.mount: Deactivated successfully. 
Dec 13 13:40:34.518803 kubelet[1849]: E1213 13:40:34.518640 1849 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:34.858558 kubelet[1849]: E1213 13:40:34.858178 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:34.914408 containerd[1455]: time="2024-12-13T13:40:34.914000457Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\"" Dec 13 13:40:34.914408 containerd[1455]: time="2024-12-13T13:40:34.914102559Z" level=info msg="TearDown network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" successfully" Dec 13 13:40:34.914408 containerd[1455]: time="2024-12-13T13:40:34.914181256Z" level=info msg="StopPodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" returns successfully" Dec 13 13:40:34.922980 containerd[1455]: time="2024-12-13T13:40:34.922822979Z" level=info msg="RemovePodSandbox for \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\"" Dec 13 13:40:34.934647 containerd[1455]: time="2024-12-13T13:40:34.934452201Z" level=info msg="Forcibly stopping sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\"" Dec 13 13:40:34.952249 containerd[1455]: time="2024-12-13T13:40:34.934563499Z" level=info msg="TearDown network for sandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" successfully" Dec 13 13:40:35.113534 containerd[1455]: time="2024-12-13T13:40:35.113031377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.113534 containerd[1455]: time="2024-12-13T13:40:35.113107690Z" level=info msg="RemovePodSandbox \"988276e4a82feec5c1d22f0b68568303bd5c8bb07d0d2ebf499241e282d02a7b\" returns successfully" Dec 13 13:40:35.113754 containerd[1455]: time="2024-12-13T13:40:35.113565298Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\"" Dec 13 13:40:35.113754 containerd[1455]: time="2024-12-13T13:40:35.113678109Z" level=info msg="TearDown network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" successfully" Dec 13 13:40:35.113754 containerd[1455]: time="2024-12-13T13:40:35.113691294Z" level=info msg="StopPodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" returns successfully" Dec 13 13:40:35.114903 containerd[1455]: time="2024-12-13T13:40:35.113988400Z" level=info msg="RemovePodSandbox for \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\"" Dec 13 13:40:35.114903 containerd[1455]: time="2024-12-13T13:40:35.114053502Z" level=info msg="Forcibly stopping sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\"" Dec 13 13:40:35.114903 containerd[1455]: time="2024-12-13T13:40:35.114115659Z" level=info msg="TearDown network for sandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" successfully" Dec 13 13:40:35.662883 containerd[1455]: time="2024-12-13T13:40:35.662769302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.663452 containerd[1455]: time="2024-12-13T13:40:35.662894647Z" level=info msg="RemovePodSandbox \"0304775c3ec5bc83f022c627b30def65d7959474b6e32883bb0ce5afc766b5d2\" returns successfully" Dec 13 13:40:35.664157 containerd[1455]: time="2024-12-13T13:40:35.663821213Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\"" Dec 13 13:40:35.664157 containerd[1455]: time="2024-12-13T13:40:35.663960935Z" level=info msg="TearDown network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" successfully" Dec 13 13:40:35.664157 containerd[1455]: time="2024-12-13T13:40:35.663980602Z" level=info msg="StopPodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" returns successfully" Dec 13 13:40:35.664501 containerd[1455]: time="2024-12-13T13:40:35.664384468Z" level=info msg="RemovePodSandbox for \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\"" Dec 13 13:40:35.664501 containerd[1455]: time="2024-12-13T13:40:35.664450923Z" level=info msg="Forcibly stopping sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\"" Dec 13 13:40:35.664841 containerd[1455]: time="2024-12-13T13:40:35.664576969Z" level=info msg="TearDown network for sandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" successfully" Dec 13 13:40:35.674896 containerd[1455]: time="2024-12-13T13:40:35.674765764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.675136 containerd[1455]: time="2024-12-13T13:40:35.674924951Z" level=info msg="RemovePodSandbox \"bde7b51a555a648300968f121f9c183a28bb5d64c63562a4f0b90818c1d5314d\" returns successfully" Dec 13 13:40:35.676412 containerd[1455]: time="2024-12-13T13:40:35.675935395Z" level=info msg="StopPodSandbox for \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\"" Dec 13 13:40:35.676412 containerd[1455]: time="2024-12-13T13:40:35.676169283Z" level=info msg="TearDown network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" successfully" Dec 13 13:40:35.676412 containerd[1455]: time="2024-12-13T13:40:35.676207294Z" level=info msg="StopPodSandbox for \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" returns successfully" Dec 13 13:40:35.678141 containerd[1455]: time="2024-12-13T13:40:35.678054286Z" level=info msg="RemovePodSandbox for \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\"" Dec 13 13:40:35.678141 containerd[1455]: time="2024-12-13T13:40:35.678127412Z" level=info msg="Forcibly stopping sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\"" Dec 13 13:40:35.678394 containerd[1455]: time="2024-12-13T13:40:35.678303002Z" level=info msg="TearDown network for sandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" successfully" Dec 13 13:40:35.745585 containerd[1455]: time="2024-12-13T13:40:35.745280165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.745585 containerd[1455]: time="2024-12-13T13:40:35.745374212Z" level=info msg="RemovePodSandbox \"a630369d7ddcad6683e652a5135a880fb7742c42daf798bf89b7fc9050deaa49\" returns successfully" Dec 13 13:40:35.746121 containerd[1455]: time="2024-12-13T13:40:35.746077169Z" level=info msg="StopPodSandbox for \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\"" Dec 13 13:40:35.746603 containerd[1455]: time="2024-12-13T13:40:35.746370899Z" level=info msg="TearDown network for sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\" successfully" Dec 13 13:40:35.746603 containerd[1455]: time="2024-12-13T13:40:35.746408129Z" level=info msg="StopPodSandbox for \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\" returns successfully" Dec 13 13:40:35.747562 containerd[1455]: time="2024-12-13T13:40:35.747057105Z" level=info msg="RemovePodSandbox for \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\"" Dec 13 13:40:35.747562 containerd[1455]: time="2024-12-13T13:40:35.747106728Z" level=info msg="Forcibly stopping sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\"" Dec 13 13:40:35.747562 containerd[1455]: time="2024-12-13T13:40:35.747230590Z" level=info msg="TearDown network for sandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\" successfully" Dec 13 13:40:35.765698 containerd[1455]: time="2024-12-13T13:40:35.765653825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.765955 containerd[1455]: time="2024-12-13T13:40:35.765868096Z" level=info msg="RemovePodSandbox \"ccd55c8566f514324c05ad92b5831a8b7ff38a891f9db22763ab89c742c0544a\" returns successfully" Dec 13 13:40:35.766743 containerd[1455]: time="2024-12-13T13:40:35.766697992Z" level=info msg="StopPodSandbox for \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\"" Dec 13 13:40:35.767280 containerd[1455]: time="2024-12-13T13:40:35.766936148Z" level=info msg="TearDown network for sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\" successfully" Dec 13 13:40:35.767280 containerd[1455]: time="2024-12-13T13:40:35.766981634Z" level=info msg="StopPodSandbox for \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\" returns successfully" Dec 13 13:40:35.767738 containerd[1455]: time="2024-12-13T13:40:35.767699108Z" level=info msg="RemovePodSandbox for \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\"" Dec 13 13:40:35.767799 containerd[1455]: time="2024-12-13T13:40:35.767753420Z" level=info msg="Forcibly stopping sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\"" Dec 13 13:40:35.767945 containerd[1455]: time="2024-12-13T13:40:35.767874075Z" level=info msg="TearDown network for sandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\" successfully" Dec 13 13:40:35.785690 containerd[1455]: time="2024-12-13T13:40:35.784992886Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.785690 containerd[1455]: time="2024-12-13T13:40:35.785101049Z" level=info msg="RemovePodSandbox \"243021f3aef59b879802d19c6ed4ef63dee69b2887abf775387daf75883aedb4\" returns successfully" Dec 13 13:40:35.786601 containerd[1455]: time="2024-12-13T13:40:35.786544093Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\"" Dec 13 13:40:35.787481 containerd[1455]: time="2024-12-13T13:40:35.786770939Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully" Dec 13 13:40:35.787481 containerd[1455]: time="2024-12-13T13:40:35.786899128Z" level=info msg="StopPodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully" Dec 13 13:40:35.788294 containerd[1455]: time="2024-12-13T13:40:35.787564725Z" level=info msg="RemovePodSandbox for \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\"" Dec 13 13:40:35.788294 containerd[1455]: time="2024-12-13T13:40:35.787609950Z" level=info msg="Forcibly stopping sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\"" Dec 13 13:40:35.788294 containerd[1455]: time="2024-12-13T13:40:35.787788485Z" level=info msg="TearDown network for sandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" successfully" Dec 13 13:40:35.795712 containerd[1455]: time="2024-12-13T13:40:35.795584184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.796393 containerd[1455]: time="2024-12-13T13:40:35.795941995Z" level=info msg="RemovePodSandbox \"d0b4e5d77bee11958327f488bb7d6d42095a4e12f8c0f5bd6e30375e38c3136b\" returns successfully" Dec 13 13:40:35.796928 containerd[1455]: time="2024-12-13T13:40:35.796719341Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\"" Dec 13 13:40:35.796928 containerd[1455]: time="2024-12-13T13:40:35.796905510Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully" Dec 13 13:40:35.797197 containerd[1455]: time="2024-12-13T13:40:35.796933973Z" level=info msg="StopPodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully" Dec 13 13:40:35.798891 containerd[1455]: time="2024-12-13T13:40:35.798030438Z" level=info msg="RemovePodSandbox for \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\"" Dec 13 13:40:35.798891 containerd[1455]: time="2024-12-13T13:40:35.798082676Z" level=info msg="Forcibly stopping sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\"" Dec 13 13:40:35.798891 containerd[1455]: time="2024-12-13T13:40:35.798214252Z" level=info msg="TearDown network for sandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" successfully" Dec 13 13:40:35.806098 containerd[1455]: time="2024-12-13T13:40:35.806039186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.806357 containerd[1455]: time="2024-12-13T13:40:35.806317147Z" level=info msg="RemovePodSandbox \"85f378b579b9462b1d486fd51155d457dce728fb4431f8ce1467462dfe31b073\" returns successfully" Dec 13 13:40:35.807192 containerd[1455]: time="2024-12-13T13:40:35.807010387Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\"" Dec 13 13:40:35.807192 containerd[1455]: time="2024-12-13T13:40:35.807093583Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully" Dec 13 13:40:35.807192 containerd[1455]: time="2024-12-13T13:40:35.807105886Z" level=info msg="StopPodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully" Dec 13 13:40:35.808857 containerd[1455]: time="2024-12-13T13:40:35.808652473Z" level=info msg="RemovePodSandbox for \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\"" Dec 13 13:40:35.808857 containerd[1455]: time="2024-12-13T13:40:35.808703970Z" level=info msg="Forcibly stopping sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\"" Dec 13 13:40:35.809520 containerd[1455]: time="2024-12-13T13:40:35.809395316Z" level=info msg="TearDown network for sandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" successfully" Dec 13 13:40:35.817070 containerd[1455]: time="2024-12-13T13:40:35.817026265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.817258 containerd[1455]: time="2024-12-13T13:40:35.817229777Z" level=info msg="RemovePodSandbox \"7e0883e229c12a822cc39361dcf80f906b0e7c0aeaf527aed50907b4024c6556\" returns successfully" Dec 13 13:40:35.817872 containerd[1455]: time="2024-12-13T13:40:35.817842906Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\"" Dec 13 13:40:35.817969 containerd[1455]: time="2024-12-13T13:40:35.817915502Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully" Dec 13 13:40:35.817969 containerd[1455]: time="2024-12-13T13:40:35.817926042Z" level=info msg="StopPodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully" Dec 13 13:40:35.819894 containerd[1455]: time="2024-12-13T13:40:35.819859495Z" level=info msg="RemovePodSandbox for \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\"" Dec 13 13:40:35.820036 containerd[1455]: time="2024-12-13T13:40:35.820009917Z" level=info msg="Forcibly stopping sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\"" Dec 13 13:40:35.820293 containerd[1455]: time="2024-12-13T13:40:35.820231592Z" level=info msg="TearDown network for sandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" successfully" Dec 13 13:40:35.824090 containerd[1455]: time="2024-12-13T13:40:35.824059647Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.824232 containerd[1455]: time="2024-12-13T13:40:35.824212132Z" level=info msg="RemovePodSandbox \"0fc10bcc989491393eed2bddf4bb8613ee1dd4cf88db3f79444d79f907b5b88e\" returns successfully" Dec 13 13:40:35.825033 containerd[1455]: time="2024-12-13T13:40:35.825010578Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\"" Dec 13 13:40:35.825200 containerd[1455]: time="2024-12-13T13:40:35.825181168Z" level=info msg="TearDown network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" successfully" Dec 13 13:40:35.825369 containerd[1455]: time="2024-12-13T13:40:35.825318415Z" level=info msg="StopPodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" returns successfully" Dec 13 13:40:35.826308 containerd[1455]: time="2024-12-13T13:40:35.826280117Z" level=info msg="RemovePodSandbox for \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\"" Dec 13 13:40:35.826375 containerd[1455]: time="2024-12-13T13:40:35.826328488Z" level=info msg="Forcibly stopping sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\"" Dec 13 13:40:35.826533 containerd[1455]: time="2024-12-13T13:40:35.826434767Z" level=info msg="TearDown network for sandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" successfully" Dec 13 13:40:35.831445 containerd[1455]: time="2024-12-13T13:40:35.830901197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.831445 containerd[1455]: time="2024-12-13T13:40:35.830946702Z" level=info msg="RemovePodSandbox \"22fcebee3b9dab014524cad81ab7f82a1e6c5d119d5c2f73d9bc0090cf46ba6a\" returns successfully" Dec 13 13:40:35.831967 containerd[1455]: time="2024-12-13T13:40:35.831936117Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\"" Dec 13 13:40:35.832095 containerd[1455]: time="2024-12-13T13:40:35.832066891Z" level=info msg="TearDown network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" successfully" Dec 13 13:40:35.832159 containerd[1455]: time="2024-12-13T13:40:35.832112226Z" level=info msg="StopPodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" returns successfully" Dec 13 13:40:35.832444 containerd[1455]: time="2024-12-13T13:40:35.832416176Z" level=info msg="RemovePodSandbox for \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\"" Dec 13 13:40:35.832503 containerd[1455]: time="2024-12-13T13:40:35.832444069Z" level=info msg="Forcibly stopping sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\"" Dec 13 13:40:35.832655 containerd[1455]: time="2024-12-13T13:40:35.832534648Z" level=info msg="TearDown network for sandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" successfully" Dec 13 13:40:35.836550 containerd[1455]: time="2024-12-13T13:40:35.836511972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.836907 containerd[1455]: time="2024-12-13T13:40:35.836565902Z" level=info msg="RemovePodSandbox \"6b19e0476a19c7b9b18add813ffe2858b4e221006ea9eafe6d0084a1494945e6\" returns successfully" Dec 13 13:40:35.837521 containerd[1455]: time="2024-12-13T13:40:35.837369017Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\"" Dec 13 13:40:35.839173 containerd[1455]: time="2024-12-13T13:40:35.837791239Z" level=info msg="TearDown network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" successfully" Dec 13 13:40:35.839173 containerd[1455]: time="2024-12-13T13:40:35.838938168Z" level=info msg="StopPodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" returns successfully" Dec 13 13:40:35.839786 containerd[1455]: time="2024-12-13T13:40:35.839754508Z" level=info msg="RemovePodSandbox for \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\"" Dec 13 13:40:35.840000 containerd[1455]: time="2024-12-13T13:40:35.839895151Z" level=info msg="Forcibly stopping sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\"" Dec 13 13:40:35.840438 containerd[1455]: time="2024-12-13T13:40:35.840347930Z" level=info msg="TearDown network for sandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" successfully" Dec 13 13:40:35.845658 containerd[1455]: time="2024-12-13T13:40:35.845512659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.845658 containerd[1455]: time="2024-12-13T13:40:35.845565257Z" level=info msg="RemovePodSandbox \"f1ce9e81b7a577ac31d7a80012199b6182bef3374a711f99a1cbb3e4acb00867\" returns successfully" Dec 13 13:40:35.846459 containerd[1455]: time="2024-12-13T13:40:35.846411623Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\"" Dec 13 13:40:35.846717 containerd[1455]: time="2024-12-13T13:40:35.846697319Z" level=info msg="TearDown network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" successfully" Dec 13 13:40:35.847837 containerd[1455]: time="2024-12-13T13:40:35.846776367Z" level=info msg="StopPodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" returns successfully" Dec 13 13:40:35.847837 containerd[1455]: time="2024-12-13T13:40:35.847153153Z" level=info msg="RemovePodSandbox for \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\"" Dec 13 13:40:35.847837 containerd[1455]: time="2024-12-13T13:40:35.847177548Z" level=info msg="Forcibly stopping sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\"" Dec 13 13:40:35.847837 containerd[1455]: time="2024-12-13T13:40:35.847253711Z" level=info msg="TearDown network for sandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" successfully" Dec 13 13:40:35.858521 kubelet[1849]: E1213 13:40:35.858469 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:35.863134 containerd[1455]: time="2024-12-13T13:40:35.862768044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.863134 containerd[1455]: time="2024-12-13T13:40:35.862863734Z" level=info msg="RemovePodSandbox \"8c527e1df8a79a53eac54012fac02690000cd29a59009e71c2a6f7d4e3daf7b9\" returns successfully" Dec 13 13:40:35.863824 containerd[1455]: time="2024-12-13T13:40:35.863787155Z" level=info msg="StopPodSandbox for \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\"" Dec 13 13:40:35.864042 containerd[1455]: time="2024-12-13T13:40:35.863946574Z" level=info msg="TearDown network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" successfully" Dec 13 13:40:35.864042 containerd[1455]: time="2024-12-13T13:40:35.863971520Z" level=info msg="StopPodSandbox for \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" returns successfully" Dec 13 13:40:35.864824 containerd[1455]: time="2024-12-13T13:40:35.864800002Z" level=info msg="RemovePodSandbox for \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\"" Dec 13 13:40:35.864952 containerd[1455]: time="2024-12-13T13:40:35.864929265Z" level=info msg="Forcibly stopping sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\"" Dec 13 13:40:35.865159 containerd[1455]: time="2024-12-13T13:40:35.865106537Z" level=info msg="TearDown network for sandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" successfully" Dec 13 13:40:35.869122 containerd[1455]: time="2024-12-13T13:40:35.868702035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.869122 containerd[1455]: time="2024-12-13T13:40:35.868775292Z" level=info msg="RemovePodSandbox \"4a5bd17547f81e4c2b1fa0c06b1cd510dc2908d186e291cccb392c701a22a6d3\" returns successfully" Dec 13 13:40:35.869271 containerd[1455]: time="2024-12-13T13:40:35.869232840Z" level=info msg="StopPodSandbox for \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\"" Dec 13 13:40:35.869404 containerd[1455]: time="2024-12-13T13:40:35.869325322Z" level=info msg="TearDown network for sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\" successfully" Dec 13 13:40:35.869463 containerd[1455]: time="2024-12-13T13:40:35.869402106Z" level=info msg="StopPodSandbox for \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\" returns successfully" Dec 13 13:40:35.869990 containerd[1455]: time="2024-12-13T13:40:35.869883129Z" level=info msg="RemovePodSandbox for \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\"" Dec 13 13:40:35.869990 containerd[1455]: time="2024-12-13T13:40:35.869915128Z" level=info msg="Forcibly stopping sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\"" Dec 13 13:40:35.870176 containerd[1455]: time="2024-12-13T13:40:35.870007391Z" level=info msg="TearDown network for sandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\" successfully" Dec 13 13:40:35.899474 containerd[1455]: time="2024-12-13T13:40:35.899154252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.899474 containerd[1455]: time="2024-12-13T13:40:35.899208824Z" level=info msg="RemovePodSandbox \"7a02fab36988c3ba295d05f615aefc6643934dde91142d524dbb970e19281268\" returns successfully" Dec 13 13:40:35.901525 containerd[1455]: time="2024-12-13T13:40:35.899886384Z" level=info msg="StopPodSandbox for \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\"" Dec 13 13:40:35.901525 containerd[1455]: time="2024-12-13T13:40:35.899998083Z" level=info msg="TearDown network for sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\" successfully" Dec 13 13:40:35.901525 containerd[1455]: time="2024-12-13T13:40:35.900014224Z" level=info msg="StopPodSandbox for \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\" returns successfully" Dec 13 13:40:35.901525 containerd[1455]: time="2024-12-13T13:40:35.900479585Z" level=info msg="RemovePodSandbox for \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\"" Dec 13 13:40:35.901525 containerd[1455]: time="2024-12-13T13:40:35.900503831Z" level=info msg="Forcibly stopping sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\"" Dec 13 13:40:35.901525 containerd[1455]: time="2024-12-13T13:40:35.900573812Z" level=info msg="TearDown network for sandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\" successfully" Dec 13 13:40:35.905073 containerd[1455]: time="2024-12-13T13:40:35.904225815Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:40:35.905073 containerd[1455]: time="2024-12-13T13:40:35.904276030Z" level=info msg="RemovePodSandbox \"3517c78823a84676a399b59c5ec7554f254b58139034c24feb583e52b812c80b\" returns successfully" Dec 13 13:40:36.437757 containerd[1455]: time="2024-12-13T13:40:36.437680869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:40:36.439363 containerd[1455]: time="2024-12-13T13:40:36.439269054Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Dec 13 13:40:36.443120 containerd[1455]: time="2024-12-13T13:40:36.443004445Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:40:36.450246 containerd[1455]: time="2024-12-13T13:40:36.450150377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:40:36.454684 containerd[1455]: time="2024-12-13T13:40:36.452964541Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.429736622s" Dec 13 13:40:36.454684 containerd[1455]: time="2024-12-13T13:40:36.453045834Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 13:40:36.458375 containerd[1455]: 
time="2024-12-13T13:40:36.458324897Z" level=info msg="CreateContainer within sandbox \"d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 13:40:36.497552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2577538615.mount: Deactivated successfully. Dec 13 13:40:36.539360 containerd[1455]: time="2024-12-13T13:40:36.539290101Z" level=info msg="CreateContainer within sandbox \"d5ca680839b26bab0560b2fefc7056baa0a98d845a5cb112542aec06bc0fcacc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"adce56b2cd6d3eb83c8c27495e473ce2fec315240fff66e0d93d543492feeb05\"" Dec 13 13:40:36.540753 containerd[1455]: time="2024-12-13T13:40:36.540673714Z" level=info msg="StartContainer for \"adce56b2cd6d3eb83c8c27495e473ce2fec315240fff66e0d93d543492feeb05\"" Dec 13 13:40:36.594380 systemd[1]: Started cri-containerd-adce56b2cd6d3eb83c8c27495e473ce2fec315240fff66e0d93d543492feeb05.scope - libcontainer container adce56b2cd6d3eb83c8c27495e473ce2fec315240fff66e0d93d543492feeb05. 
Dec 13 13:40:36.628638 containerd[1455]: time="2024-12-13T13:40:36.628411031Z" level=info msg="StartContainer for \"adce56b2cd6d3eb83c8c27495e473ce2fec315240fff66e0d93d543492feeb05\" returns successfully" Dec 13 13:40:36.859235 kubelet[1849]: E1213 13:40:36.859054 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:36.934605 kubelet[1849]: I1213 13:40:36.934544 1849 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.502691969 podStartE2EDuration="7.934472461s" podCreationTimestamp="2024-12-13 13:40:29 +0000 UTC" firstStartedPulling="2024-12-13 13:40:30.021881636 +0000 UTC m=+57.221111009" lastFinishedPulling="2024-12-13 13:40:36.453662129 +0000 UTC m=+63.652891501" observedRunningTime="2024-12-13 13:40:36.934121523 +0000 UTC m=+64.133350896" watchObservedRunningTime="2024-12-13 13:40:36.934472461 +0000 UTC m=+64.133701833" Dec 13 13:40:37.860519 kubelet[1849]: E1213 13:40:37.860447 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:38.861211 kubelet[1849]: E1213 13:40:38.861132 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:39.862105 kubelet[1849]: E1213 13:40:39.862017 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:40.862782 kubelet[1849]: E1213 13:40:40.862588 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:41.863404 kubelet[1849]: E1213 13:40:41.863286 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:42.864316 kubelet[1849]: E1213 13:40:42.864105 1849 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:43.865096 kubelet[1849]: E1213 13:40:43.865004 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:44.865720 kubelet[1849]: E1213 13:40:44.865594 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:45.866360 kubelet[1849]: E1213 13:40:45.866262 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:46.867241 kubelet[1849]: E1213 13:40:46.867118 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:47.867951 kubelet[1849]: E1213 13:40:47.867860 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:48.869216 kubelet[1849]: E1213 13:40:48.869057 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:49.869328 kubelet[1849]: E1213 13:40:49.869235 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:50.870258 kubelet[1849]: E1213 13:40:50.870163 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:51.871203 kubelet[1849]: E1213 13:40:51.871117 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:52.871862 kubelet[1849]: E1213 13:40:52.871737 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:53.873201 kubelet[1849]: E1213 13:40:53.873006 1849 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:54.518662 kubelet[1849]: E1213 13:40:54.518494 1849 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:54.873862 kubelet[1849]: E1213 13:40:54.873573 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:55.874728 kubelet[1849]: E1213 13:40:55.874610 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:56.875214 kubelet[1849]: E1213 13:40:56.875132 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:57.876142 kubelet[1849]: E1213 13:40:57.876054 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:58.877289 kubelet[1849]: E1213 13:40:58.877220 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:40:59.877493 kubelet[1849]: E1213 13:40:59.877433 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:00.878680 kubelet[1849]: E1213 13:41:00.878606 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:01.879510 kubelet[1849]: E1213 13:41:01.879432 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:01.924369 kubelet[1849]: I1213 13:41:01.924290 1849 topology_manager.go:215] "Topology Admit Handler" podUID="c9ff31ac-b1d4-440b-a97f-fea09914780c" podNamespace="default" podName="test-pod-1" Dec 13 13:41:01.936790 systemd[1]: Created slice 
kubepods-besteffort-podc9ff31ac_b1d4_440b_a97f_fea09914780c.slice - libcontainer container kubepods-besteffort-podc9ff31ac_b1d4_440b_a97f_fea09914780c.slice. Dec 13 13:41:02.028507 kubelet[1849]: I1213 13:41:02.028230 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzs87\" (UniqueName: \"kubernetes.io/projected/c9ff31ac-b1d4-440b-a97f-fea09914780c-kube-api-access-kzs87\") pod \"test-pod-1\" (UID: \"c9ff31ac-b1d4-440b-a97f-fea09914780c\") " pod="default/test-pod-1" Dec 13 13:41:02.028507 kubelet[1849]: I1213 13:41:02.028315 1849 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-854c1b3b-b211-4717-86db-fe059d4420b6\" (UniqueName: \"kubernetes.io/nfs/c9ff31ac-b1d4-440b-a97f-fea09914780c-pvc-854c1b3b-b211-4717-86db-fe059d4420b6\") pod \"test-pod-1\" (UID: \"c9ff31ac-b1d4-440b-a97f-fea09914780c\") " pod="default/test-pod-1" Dec 13 13:41:02.205663 kernel: FS-Cache: Loaded Dec 13 13:41:02.293940 kernel: RPC: Registered named UNIX socket transport module. Dec 13 13:41:02.294105 kernel: RPC: Registered udp transport module. Dec 13 13:41:02.294164 kernel: RPC: Registered tcp transport module. Dec 13 13:41:02.294813 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 13:41:02.294960 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 13:41:02.604389 kernel: NFS: Registering the id_resolver key type Dec 13 13:41:02.604507 kernel: Key type id_resolver registered Dec 13 13:41:02.607482 kernel: Key type id_legacy registered Dec 13 13:41:02.659047 nfsidmap[3765]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 13:41:02.668837 nfsidmap[3766]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 13:41:02.842172 containerd[1455]: time="2024-12-13T13:41:02.842070999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c9ff31ac-b1d4-440b-a97f-fea09914780c,Namespace:default,Attempt:0,}" Dec 13 13:41:02.886746 kubelet[1849]: E1213 13:41:02.880327 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:03.130894 systemd-networkd[1350]: cali5ec59c6bf6e: Link UP Dec 13 13:41:03.131381 systemd-networkd[1350]: cali5ec59c6bf6e: Gained carrier Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:02.958 [INFO][3768] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.147-k8s-test--pod--1-eth0 default c9ff31ac-b1d4-440b-a97f-fea09914780c 1378 0 2024-12-13 13:40:33 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.147 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.147-k8s-test--pod--1-" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:02.959 [INFO][3768] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="172.24.4.147-k8s-test--pod--1-eth0" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.028 [INFO][3779] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" HandleID="k8s-pod-network.84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Workload="172.24.4.147-k8s-test--pod--1-eth0" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.052 [INFO][3779] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" HandleID="k8s-pod-network.84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Workload="172.24.4.147-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a9c60), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.147", "pod":"test-pod-1", "timestamp":"2024-12-13 13:41:03.02839128 +0000 UTC"}, Hostname:"172.24.4.147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.052 [INFO][3779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.052 [INFO][3779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.052 [INFO][3779] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.147' Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.057 [INFO][3779] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" host="172.24.4.147" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.067 [INFO][3779] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.147" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.078 [INFO][3779] ipam/ipam.go 489: Trying affinity for 192.168.48.0/26 host="172.24.4.147" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.082 [INFO][3779] ipam/ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="172.24.4.147" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.088 [INFO][3779] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="172.24.4.147" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.088 [INFO][3779] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" host="172.24.4.147" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.092 [INFO][3779] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04 Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.102 [INFO][3779] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" host="172.24.4.147" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.119 [INFO][3779] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.48.4/26] block=192.168.48.0/26 
handle="k8s-pod-network.84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" host="172.24.4.147" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.120 [INFO][3779] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.4/26] handle="k8s-pod-network.84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" host="172.24.4.147" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.120 [INFO][3779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.120 [INFO][3779] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.4/26] IPv6=[] ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" HandleID="k8s-pod-network.84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Workload="172.24.4.147-k8s-test--pod--1-eth0" Dec 13 13:41:03.151662 containerd[1455]: 2024-12-13 13:41:03.122 [INFO][3768] cni-plugin/k8s.go 386: Populated endpoint ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.147-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.147-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c9ff31ac-b1d4-440b-a97f-fea09914780c", ResourceVersion:"1378", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 40, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"172.24.4.147", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:41:03.160325 containerd[1455]: 2024-12-13 13:41:03.123 [INFO][3768] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.48.4/32] ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.147-k8s-test--pod--1-eth0" Dec 13 13:41:03.160325 containerd[1455]: 2024-12-13 13:41:03.123 [INFO][3768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.147-k8s-test--pod--1-eth0" Dec 13 13:41:03.160325 containerd[1455]: 2024-12-13 13:41:03.127 [INFO][3768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.147-k8s-test--pod--1-eth0" Dec 13 13:41:03.160325 containerd[1455]: 2024-12-13 13:41:03.128 [INFO][3768] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.147-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.147-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c9ff31ac-b1d4-440b-a97f-fea09914780c", ResourceVersion:"1378", Generation:0, 
CreationTimestamp:time.Date(2024, time.December, 13, 13, 40, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.147", ContainerID:"84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"a2:c9:0b:55:7e:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:41:03.160325 containerd[1455]: 2024-12-13 13:41:03.141 [INFO][3768] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.147-k8s-test--pod--1-eth0" Dec 13 13:41:03.202408 containerd[1455]: time="2024-12-13T13:41:03.202118541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:41:03.202408 containerd[1455]: time="2024-12-13T13:41:03.202200376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:41:03.202408 containerd[1455]: time="2024-12-13T13:41:03.202228148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:41:03.202408 containerd[1455]: time="2024-12-13T13:41:03.202304233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:41:03.228831 systemd[1]: Started cri-containerd-84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04.scope - libcontainer container 84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04. Dec 13 13:41:03.267819 containerd[1455]: time="2024-12-13T13:41:03.267776880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c9ff31ac-b1d4-440b-a97f-fea09914780c,Namespace:default,Attempt:0,} returns sandbox id \"84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04\"" Dec 13 13:41:03.269843 containerd[1455]: time="2024-12-13T13:41:03.269816655Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:41:03.739338 containerd[1455]: time="2024-12-13T13:41:03.739173668Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:41:03.741226 containerd[1455]: time="2024-12-13T13:41:03.741129341Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 13:41:03.807310 containerd[1455]: time="2024-12-13T13:41:03.807229490Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 537.370255ms" Dec 13 13:41:03.807310 containerd[1455]: time="2024-12-13T13:41:03.807303291Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 
13:41:03.811978 containerd[1455]: time="2024-12-13T13:41:03.811489420Z" level=info msg="CreateContainer within sandbox \"84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 13:41:03.842425 containerd[1455]: time="2024-12-13T13:41:03.842362831Z" level=info msg="CreateContainer within sandbox \"84722d08d13ff7c844e4b6415ec9dff4c946f2d6eea9a390128227f02529ee04\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2502279fa8fb37be460227f622d7336e6f0e252547fd122539be454df7c962cb\"" Dec 13 13:41:03.843389 containerd[1455]: time="2024-12-13T13:41:03.843335368Z" level=info msg="StartContainer for \"2502279fa8fb37be460227f622d7336e6f0e252547fd122539be454df7c962cb\"" Dec 13 13:41:03.880969 kubelet[1849]: E1213 13:41:03.880870 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:03.883784 systemd[1]: Started cri-containerd-2502279fa8fb37be460227f622d7336e6f0e252547fd122539be454df7c962cb.scope - libcontainer container 2502279fa8fb37be460227f622d7336e6f0e252547fd122539be454df7c962cb. 
Dec 13 13:41:03.921885 containerd[1455]: time="2024-12-13T13:41:03.921816920Z" level=info msg="StartContainer for \"2502279fa8fb37be460227f622d7336e6f0e252547fd122539be454df7c962cb\" returns successfully" Dec 13 13:41:04.576830 systemd-networkd[1350]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 13:41:04.882125 kubelet[1849]: E1213 13:41:04.881909 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:05.882918 kubelet[1849]: E1213 13:41:05.882795 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:06.883834 kubelet[1849]: E1213 13:41:06.883715 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:07.884498 kubelet[1849]: E1213 13:41:07.884393 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:08.885023 kubelet[1849]: E1213 13:41:08.884873 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:09.885169 kubelet[1849]: E1213 13:41:09.885075 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:41:10.885874 kubelet[1849]: E1213 13:41:10.885785 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"