Jan 29 12:56:26.046760 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:56:26.046806 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:56:26.046823 kernel: BIOS-provided physical RAM map: Jan 29 12:56:26.046835 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 12:56:26.046846 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 12:56:26.046861 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 12:56:26.046875 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 29 12:56:26.046907 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 29 12:56:26.046919 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 12:56:26.046930 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 12:56:26.046942 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 29 12:56:26.046954 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 12:56:26.046966 kernel: NX (Execute Disable) protection: active Jan 29 12:56:26.046977 kernel: APIC: Static calls initialized Jan 29 12:56:26.046996 kernel: SMBIOS 3.0.0 present. Jan 29 12:56:26.047010 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 29 12:56:26.047020 kernel: Hypervisor detected: KVM Jan 29 12:56:26.047032 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:56:26.047045 kernel: kvm-clock: using sched offset of 4487475156 cycles Jan 29 12:56:26.047062 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:56:26.047075 kernel: tsc: Detected 1996.249 MHz processor Jan 29 12:56:26.047088 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:56:26.047101 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:56:26.047114 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 29 12:56:26.047127 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 12:56:26.047140 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:56:26.047152 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 29 12:56:26.047165 kernel: ACPI: Early table checksum verification disabled Jan 29 12:56:26.047181 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 29 12:56:26.047193 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:56:26.047206 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:56:26.047219 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:56:26.047232 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 29 12:56:26.047245 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:56:26.047258 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:56:26.047270 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 29 12:56:26.047283 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 29 12:56:26.047298 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 29 12:56:26.047311 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 29 12:56:26.047324 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 29 12:56:26.047337 kernel: No NUMA configuration found Jan 29 12:56:26.047347 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 29 12:56:26.047357 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 29 12:56:26.047394 kernel: Zone ranges: Jan 29 12:56:26.047425 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:56:26.047459 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 12:56:26.047490 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 29 12:56:26.047524 kernel: Movable zone start for each node Jan 29 12:56:26.047534 kernel: Early memory node ranges Jan 29 12:56:26.047544 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 12:56:26.047554 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 29 12:56:26.047567 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 29 12:56:26.047577 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 29 12:56:26.047587 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:56:26.047597 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 12:56:26.047607 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 29 12:56:26.047617 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 12:56:26.047627 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:56:26.047636 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:56:26.047646 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 12:56:26.047659 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:56:26.047668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:56:26.047678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:56:26.047688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:56:26.047720 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:56:26.047731 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 12:56:26.047741 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 12:56:26.047752 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 29 12:56:26.047762 kernel: Booting paravirtualized kernel on KVM Jan 29 12:56:26.047775 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:56:26.047785 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 12:56:26.047795 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 12:56:26.047805 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 12:56:26.047815 kernel: pcpu-alloc: [0] 0 1 Jan 29 12:56:26.047824 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 29 12:56:26.047836 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:56:26.047850 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:56:26.047867 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:56:26.049920 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:56:26.049937 kernel: Fallback order for Node 0: 0 Jan 29 12:56:26.049947 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 29 12:56:26.049956 kernel: Policy zone: Normal Jan 29 12:56:26.049966 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:56:26.049975 kernel: software IO TLB: area num 2. Jan 29 12:56:26.049985 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 29 12:56:26.049995 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:56:26.050008 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:56:26.050017 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:56:26.050026 kernel: Dynamic Preempt: voluntary Jan 29 12:56:26.050036 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:56:26.050046 kernel: rcu: RCU event tracing is enabled. Jan 29 12:56:26.050058 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:56:26.050070 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:56:26.050084 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:56:26.050097 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:56:26.050114 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:56:26.050128 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:56:26.050139 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 12:56:26.050148 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:56:26.050158 kernel: Console: colour VGA+ 80x25 Jan 29 12:56:26.050167 kernel: printk: console [tty0] enabled Jan 29 12:56:26.050176 kernel: printk: console [ttyS0] enabled Jan 29 12:56:26.050187 kernel: ACPI: Core revision 20230628 Jan 29 12:56:26.050200 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:56:26.050217 kernel: x2apic enabled Jan 29 12:56:26.050230 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:56:26.050241 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 12:56:26.050250 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 12:56:26.050260 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 29 12:56:26.050270 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 29 12:56:26.050279 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 29 12:56:26.050289 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:56:26.050298 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:56:26.050311 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:56:26.050324 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:56:26.050337 kernel: Speculative Store Bypass: Vulnerable Jan 29 12:56:26.050350 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 29 12:56:26.050363 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:56:26.050382 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:56:26.050394 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:56:26.050403 kernel: landlock: Up and running. Jan 29 12:56:26.050416 kernel: SELinux: Initializing. Jan 29 12:56:26.050430 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:56:26.050444 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:56:26.050458 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 29 12:56:26.050475 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:56:26.050489 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:56:26.050499 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:56:26.050510 kernel: Performance Events: AMD PMU driver. Jan 29 12:56:26.050523 kernel: ... version: 0 Jan 29 12:56:26.050541 kernel: ... bit width: 48 Jan 29 12:56:26.050555 kernel: ... generic registers: 4 Jan 29 12:56:26.050568 kernel: ... value mask: 0000ffffffffffff Jan 29 12:56:26.050581 kernel: ... max period: 00007fffffffffff Jan 29 12:56:26.050595 kernel: ... fixed-purpose events: 0 Jan 29 12:56:26.050608 kernel: ... event mask: 000000000000000f Jan 29 12:56:26.050622 kernel: signal: max sigframe size: 1440 Jan 29 12:56:26.050636 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:56:26.050649 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:56:26.050697 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:56:26.050712 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:56:26.050726 kernel: .... 
node #0, CPUs: #1 Jan 29 12:56:26.050745 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:56:26.050758 kernel: smpboot: Max logical packages: 2 Jan 29 12:56:26.050772 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 29 12:56:26.050785 kernel: devtmpfs: initialized Jan 29 12:56:26.050799 kernel: x86/mm: Memory block size: 128MB Jan 29 12:56:26.050813 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:56:26.050830 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:56:26.050843 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:56:26.050857 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:56:26.050870 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:56:26.050903 kernel: audit: type=2000 audit(1738155385.396:1): state=initialized audit_enabled=0 res=1 Jan 29 12:56:26.050917 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:56:26.050931 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:56:26.050944 kernel: cpuidle: using governor menu Jan 29 12:56:26.050957 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:56:26.050975 kernel: dca service started, version 1.12.1 Jan 29 12:56:26.050989 kernel: PCI: Using configuration type 1 for base access Jan 29 12:56:26.051003 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 12:56:26.051017 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:56:26.051030 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:56:26.051043 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:56:26.051057 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:56:26.051071 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:56:26.051085 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:56:26.051101 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:56:26.051114 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:56:26.051128 kernel: ACPI: Interpreter enabled Jan 29 12:56:26.051141 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 12:56:26.051155 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:56:26.051169 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:56:26.051184 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 12:56:26.051196 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 29 12:56:26.051211 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:56:26.051423 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:56:26.051540 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 12:56:26.051639 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 12:56:26.051653 kernel: acpiphp: Slot [3] registered Jan 29 12:56:26.051663 kernel: acpiphp: Slot [4] registered Jan 29 12:56:26.051673 kernel: acpiphp: Slot [5] registered Jan 29 12:56:26.051683 kernel: acpiphp: Slot [6] registered Jan 29 12:56:26.051693 kernel: acpiphp: Slot [7] registered Jan 29 12:56:26.051706 kernel: acpiphp: Slot [8] registered Jan 29 12:56:26.051716 kernel: acpiphp: Slot [9] registered Jan 29 12:56:26.051726 kernel: acpiphp: Slot [10] registered Jan 29 12:56:26.051736 
kernel: acpiphp: Slot [11] registered Jan 29 12:56:26.051745 kernel: acpiphp: Slot [12] registered Jan 29 12:56:26.051755 kernel: acpiphp: Slot [13] registered Jan 29 12:56:26.051765 kernel: acpiphp: Slot [14] registered Jan 29 12:56:26.051775 kernel: acpiphp: Slot [15] registered Jan 29 12:56:26.051784 kernel: acpiphp: Slot [16] registered Jan 29 12:56:26.051796 kernel: acpiphp: Slot [17] registered Jan 29 12:56:26.051805 kernel: acpiphp: Slot [18] registered Jan 29 12:56:26.051815 kernel: acpiphp: Slot [19] registered Jan 29 12:56:26.051825 kernel: acpiphp: Slot [20] registered Jan 29 12:56:26.051834 kernel: acpiphp: Slot [21] registered Jan 29 12:56:26.051844 kernel: acpiphp: Slot [22] registered Jan 29 12:56:26.051854 kernel: acpiphp: Slot [23] registered Jan 29 12:56:26.051864 kernel: acpiphp: Slot [24] registered Jan 29 12:56:26.051873 kernel: acpiphp: Slot [25] registered Jan 29 12:56:26.052384 kernel: acpiphp: Slot [26] registered Jan 29 12:56:26.052400 kernel: acpiphp: Slot [27] registered Jan 29 12:56:26.052410 kernel: acpiphp: Slot [28] registered Jan 29 12:56:26.052420 kernel: acpiphp: Slot [29] registered Jan 29 12:56:26.052430 kernel: acpiphp: Slot [30] registered Jan 29 12:56:26.052439 kernel: acpiphp: Slot [31] registered Jan 29 12:56:26.052449 kernel: PCI host bridge to bus 0000:00 Jan 29 12:56:26.052564 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:56:26.052656 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:56:26.052749 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:56:26.052836 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 12:56:26.052971 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 29 12:56:26.053060 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:56:26.053190 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 12:56:26.053351 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 12:56:26.053514 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 29 12:56:26.053677 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 29 12:56:26.053828 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 12:56:26.055730 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 12:56:26.057926 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 12:56:26.058036 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 12:56:26.058166 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 12:56:26.058281 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 29 12:56:26.058378 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 29 12:56:26.058484 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 29 12:56:26.058586 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 29 12:56:26.058695 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 29 12:56:26.058809 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 29 12:56:26.058933 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 29 12:56:26.059033 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:56:26.059154 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 29 12:56:26.059257 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 29 12:56:26.059355 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 29 12:56:26.059455 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 29 12:56:26.059555 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 29 12:56:26.059662 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 29 12:56:26.059781 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 12:56:26.060955 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 29 12:56:26.061063 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 29 12:56:26.061199 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 29 12:56:26.061317 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 29 12:56:26.061423 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 29 12:56:26.061544 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:56:26.061666 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 29 12:56:26.061771 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 29 12:56:26.061877 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 29 12:56:26.062962 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:56:26.062976 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:56:26.062986 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:56:26.062996 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:56:26.063012 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 12:56:26.063027 kernel: iommu: Default domain type: Translated Jan 29 12:56:26.063041 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:56:26.063055 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:56:26.063069 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:56:26.063083 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 12:56:26.063097 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 29 12:56:26.063231 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 29 12:56:26.063368 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 29 12:56:26.063509 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:56:26.063531 kernel: vgaarb: loaded Jan 29 12:56:26.063546 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:56:26.063560 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:56:26.063573 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:56:26.063586 kernel: pnp: PnP ACPI init Jan 29 12:56:26.063701 kernel: pnp 00:03: [dma 2] Jan 29 12:56:26.063717 kernel: pnp: PnP ACPI: found 5 devices Jan 29 12:56:26.063728 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:56:26.063742 kernel: NET: Registered PF_INET protocol family Jan 29 12:56:26.063752 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:56:26.063762 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 12:56:26.063772 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:56:26.063783 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:56:26.063793 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 29 12:56:26.063803 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 12:56:26.063813 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:56:26.063825 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:56:26.063835 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:56:26.063846 kernel: NET: Registered PF_XDP protocol family Jan 29 12:56:26.065055 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:56:26.065157 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:56:26.065264 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:56:26.065355 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 29 12:56:26.065441 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 29 12:56:26.065545 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 29 12:56:26.065681 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 12:56:26.065696 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:56:26.065707 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:56:26.065717 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 29 12:56:26.065727 kernel: Initialise system trusted keyrings Jan 29 12:56:26.065737 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 12:56:26.065747 kernel: Key type asymmetric registered Jan 29 12:56:26.065757 kernel: Asymmetric key parser 'x509' registered Jan 29 12:56:26.065771 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:56:26.065781 kernel: io scheduler mq-deadline registered Jan 29 12:56:26.065791 kernel: io scheduler kyber registered Jan 29 12:56:26.065801 kernel: io scheduler bfq registered Jan 29 12:56:26.065811 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:56:26.065822 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 29 12:56:26.065832 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 12:56:26.065843 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 12:56:26.065853 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 12:56:26.065865 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:56:26.065875 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:56:26.065913 kernel: random: crng init done Jan 29 12:56:26.065923 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:56:26.065933 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:56:26.065943 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:56:26.066054 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 12:56:26.066070 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:56:26.066160 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 12:56:26.066247 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T12:56:25 UTC (1738155385) Jan 29 12:56:26.066354 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 29 12:56:26.066378 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 12:56:26.066392 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:56:26.066402 kernel: Segment Routing with IPv6 Jan 29 12:56:26.066412 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:56:26.066423 kernel: NET: Registered PF_PACKET 
protocol family Jan 29 12:56:26.066433 kernel: Key type dns_resolver registered Jan 29 12:56:26.066447 kernel: IPI shorthand broadcast: enabled Jan 29 12:56:26.066457 kernel: sched_clock: Marking stable (1059007360, 181582431)->(1287420857, -46831066) Jan 29 12:56:26.066467 kernel: registered taskstats version 1 Jan 29 12:56:26.066477 kernel: Loading compiled-in X.509 certificates Jan 29 12:56:26.066487 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:56:26.066497 kernel: Key type .fscrypt registered Jan 29 12:56:26.066506 kernel: Key type fscrypt-provisioning registered Jan 29 12:56:26.066517 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:56:26.066529 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:56:26.066538 kernel: ima: No architecture policies found Jan 29 12:56:26.066548 kernel: clk: Disabling unused clocks Jan 29 12:56:26.066558 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:56:26.066568 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:56:26.066578 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:56:26.066591 kernel: Run /init as init process Jan 29 12:56:26.066605 kernel: with arguments: Jan 29 12:56:26.066618 kernel: /init Jan 29 12:56:26.066631 kernel: with environment: Jan 29 12:56:26.066646 kernel: HOME=/ Jan 29 12:56:26.066656 kernel: TERM=linux Jan 29 12:56:26.066666 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:56:26.066679 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:56:26.066692 systemd[1]: Detected virtualization kvm. Jan 29 12:56:26.066703 systemd[1]: Detected architecture x86-64. Jan 29 12:56:26.066714 systemd[1]: Running in initrd. Jan 29 12:56:26.066726 systemd[1]: No hostname configured, using default hostname. Jan 29 12:56:26.066737 systemd[1]: Hostname set to . Jan 29 12:56:26.066748 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:56:26.066758 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:56:26.066769 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:56:26.066779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:56:26.066791 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:56:26.066811 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:56:26.066824 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:56:26.066835 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:56:26.066848 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:56:26.066859 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:56:26.066872 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 29 12:56:26.068920 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:56:26.068939 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:56:26.068955 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:56:26.068971 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:56:26.068982 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:56:26.068993 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:56:26.069004 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:56:26.069015 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:56:26.069031 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:56:26.069042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:56:26.069053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:56:26.069064 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:56:26.069075 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:56:26.069086 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:56:26.069097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:56:26.069108 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:56:26.069121 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:56:26.069132 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:56:26.069143 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:56:26.069154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:56:26.069165 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:56:26.069201 systemd-journald[184]: Collecting audit messages is disabled. Jan 29 12:56:26.069231 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:56:26.069242 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:56:26.069258 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:56:26.069270 systemd-journald[184]: Journal started Jan 29 12:56:26.069295 systemd-journald[184]: Runtime Journal (/run/log/journal/3a52b605febe41b3bcabcac6fa94d097) is 8.0M, max 78.3M, 70.3M free. Jan 29 12:56:26.047761 systemd-modules-load[185]: Inserted module 'overlay' Jan 29 12:56:26.121143 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:56:26.121169 kernel: Bridge firewalling registered Jan 29 12:56:26.121184 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:56:26.083687 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 29 12:56:26.122382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:56:26.123620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:56:26.128339 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:56:26.137293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:56:26.143731 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 29 12:56:26.150016 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:56:26.160207 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:56:26.161680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:56:26.171094 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:56:26.177053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:56:26.178724 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:56:26.184019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:56:26.190027 dracut-cmdline[214]: dracut-dracut-053 Jan 29 12:56:26.193979 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:56:26.201592 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:56:26.237320 systemd-resolved[221]: Positive Trust Anchors: Jan 29 12:56:26.237336 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:56:26.237380 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:56:26.244671 systemd-resolved[221]: Defaulting to hostname 'linux'. Jan 29 12:56:26.245654 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:56:26.248323 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:56:26.266917 kernel: SCSI subsystem initialized Jan 29 12:56:26.277918 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:56:26.289953 kernel: iscsi: registered transport (tcp) Jan 29 12:56:26.347577 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:56:26.347708 kernel: QLogic iSCSI HBA Driver Jan 29 12:56:26.418248 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:56:26.426069 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:56:26.475085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 12:56:26.475202 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:56:26.478468 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:56:26.538967 kernel: raid6: sse2x4 gen() 5143 MB/s Jan 29 12:56:26.556993 kernel: raid6: sse2x2 gen() 8887 MB/s Jan 29 12:56:26.575362 kernel: raid6: sse2x1 gen() 10163 MB/s Jan 29 12:56:26.575429 kernel: raid6: using algorithm sse2x1 gen() 10163 MB/s Jan 29 12:56:26.594525 kernel: raid6: .... xor() 7406 MB/s, rmw enabled Jan 29 12:56:26.594588 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 12:56:26.617181 kernel: xor: measuring software checksum speed Jan 29 12:56:26.617263 kernel: prefetch64-sse : 17277 MB/sec Jan 29 12:56:26.619346 kernel: generic_sse : 15734 MB/sec Jan 29 12:56:26.619399 kernel: xor: using function: prefetch64-sse (17277 MB/sec) Jan 29 12:56:26.808984 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:56:26.825945 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:56:26.834147 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:56:26.868629 systemd-udevd[403]: Using default interface naming scheme 'v255'. Jan 29 12:56:26.874003 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:56:26.890209 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:56:26.917751 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Jan 29 12:56:26.963387 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:56:26.968208 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:56:27.041214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:56:27.046773 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:56:27.084122 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:56:27.087240 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:56:27.089257 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:56:27.093099 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:56:27.105244 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:56:27.129521 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:56:27.153936 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 29 12:56:27.189748 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 29 12:56:27.189872 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:56:27.189904 kernel: GPT:17805311 != 20971519 Jan 29 12:56:27.189918 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:56:27.189931 kernel: GPT:17805311 != 20971519 Jan 29 12:56:27.189949 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:56:27.189961 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:56:27.181811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:56:27.181993 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:56:27.192948 kernel: libata version 3.00 loaded. Jan 29 12:56:27.188286 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 12:56:27.188794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:56:27.205314 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 29 12:56:27.205503 kernel: scsi host0: ata_piix Jan 29 12:56:27.205753 kernel: scsi host1: ata_piix Jan 29 12:56:27.205909 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 29 12:56:27.205926 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 29 12:56:27.188944 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:56:27.189460 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:56:27.197168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:56:27.237914 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Jan 29 12:56:27.246923 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (472) Jan 29 12:56:27.259844 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 12:56:27.282247 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:56:27.289040 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 12:56:27.294066 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 12:56:27.294696 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 12:56:27.301255 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:56:27.311070 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:56:27.315064 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:56:27.323182 disk-uuid[504]: Primary Header is updated. Jan 29 12:56:27.323182 disk-uuid[504]: Secondary Entries is updated. Jan 29 12:56:27.323182 disk-uuid[504]: Secondary Header is updated. Jan 29 12:56:27.331917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:56:27.335098 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:56:27.341303 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:56:28.353907 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:56:28.355995 disk-uuid[506]: The operation has completed successfully. Jan 29 12:56:28.412391 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:56:28.412547 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:56:28.436084 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:56:28.459725 sh[527]: Success Jan 29 12:56:28.483286 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 29 12:56:28.563826 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:56:28.575246 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:56:28.578302 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 12:56:28.612945 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:56:28.613031 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:56:28.616835 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:56:28.621686 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:56:28.625471 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:56:28.651377 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:56:28.653613 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:56:28.660238 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:56:28.668448 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:56:28.696984 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:56:28.705013 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:56:28.705070 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:56:28.717938 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:56:28.742949 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:56:28.743697 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:56:28.760236 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:56:28.766086 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:56:28.802422 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:56:28.814457 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:56:28.838141 systemd-networkd[709]: lo: Link UP Jan 29 12:56:28.838154 systemd-networkd[709]: lo: Gained carrier Jan 29 12:56:28.839439 systemd-networkd[709]: Enumeration completed Jan 29 12:56:28.840007 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:56:28.840212 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:56:28.840216 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:56:28.841842 systemd-networkd[709]: eth0: Link UP Jan 29 12:56:28.841845 systemd-networkd[709]: eth0: Gained carrier Jan 29 12:56:28.841853 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:56:28.847923 systemd[1]: Reached target network.target - Network. Jan 29 12:56:28.856960 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.160/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:56:28.911724 ignition[674]: Ignition 2.19.0 Jan 29 12:56:28.911737 ignition[674]: Stage: fetch-offline Jan 29 12:56:28.913424 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 12:56:28.911784 ignition[674]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:56:28.911794 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:56:28.911942 ignition[674]: parsed url from cmdline: "" Jan 29 12:56:28.911946 ignition[674]: no config URL provided Jan 29 12:56:28.911952 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:56:28.911961 ignition[674]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:56:28.911966 ignition[674]: failed to fetch config: resource requires networking Jan 29 12:56:28.912212 ignition[674]: Ignition finished successfully Jan 29 12:56:28.921149 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 12:56:28.934158 ignition[719]: Ignition 2.19.0 Jan 29 12:56:28.934171 ignition[719]: Stage: fetch Jan 29 12:56:28.934338 ignition[719]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:56:28.934351 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:56:28.934441 ignition[719]: parsed url from cmdline: "" Jan 29 12:56:28.934445 ignition[719]: no config URL provided Jan 29 12:56:28.934451 ignition[719]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:56:28.934459 ignition[719]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:56:28.934580 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 29 12:56:28.934734 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 29 12:56:28.934769 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 29 12:56:29.054419 ignition[719]: GET result: OK Jan 29 12:56:29.054488 ignition[719]: parsing config with SHA512: 30a097b0504289e1aba0eb68b62101d92ce4b82ed47ddf5711ec9e032002faa243be20fb7a8aa86b4bb413c07d219bbdfb3a1fc0810dcf25fa43b675920b2b2e Jan 29 12:56:29.057752 unknown[719]: fetched base config from "system" Jan 29 12:56:29.057766 unknown[719]: fetched base config from "system" Jan 29 12:56:29.058087 ignition[719]: fetch: fetch complete Jan 29 12:56:29.057774 unknown[719]: fetched user config from "openstack" Jan 29 12:56:29.058093 ignition[719]: fetch: fetch passed Jan 29 12:56:29.061131 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:56:29.058135 ignition[719]: Ignition finished successfully Jan 29 12:56:29.070132 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:56:29.098935 ignition[725]: Ignition 2.19.0 Jan 29 12:56:29.098966 ignition[725]: Stage: kargs Jan 29 12:56:29.099375 ignition[725]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:56:29.099403 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:56:29.105029 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:56:29.101305 ignition[725]: kargs: kargs passed Jan 29 12:56:29.101404 ignition[725]: Ignition finished successfully Jan 29 12:56:29.115237 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:56:29.150715 ignition[731]: Ignition 2.19.0 Jan 29 12:56:29.152309 ignition[731]: Stage: disks Jan 29 12:56:29.152720 ignition[731]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:56:29.152746 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:56:29.159069 ignition[731]: disks: disks passed Jan 29 12:56:29.160303 ignition[731]: Ignition finished successfully Jan 29 12:56:29.162272 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 29 12:56:29.164715 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:56:29.166618 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:56:29.169597 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:56:29.172499 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:56:29.174982 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:56:29.183129 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:56:29.217252 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 12:56:29.231026 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:56:29.240115 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:56:29.487922 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:56:29.489335 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:56:29.492939 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:56:29.535024 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:56:29.574056 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:56:29.575807 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:56:29.584217 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 29 12:56:29.591806 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:56:29.592005 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:56:29.600365 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:56:29.608138 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:56:29.653947 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (747) Jan 29 12:56:29.709951 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:56:29.710041 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:56:29.717680 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:56:29.785967 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:56:29.792873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:56:29.917921 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:56:29.925047 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:56:29.931483 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:56:29.940189 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:56:30.044009 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:56:30.049992 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:56:30.052064 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:56:30.059495 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 29 12:56:30.061947 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:56:30.087099 ignition[863]: INFO : Ignition 2.19.0 Jan 29 12:56:30.088526 ignition[863]: INFO : Stage: mount Jan 29 12:56:30.089998 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:56:30.089998 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:56:30.091552 ignition[863]: INFO : mount: mount passed Jan 29 12:56:30.091552 ignition[863]: INFO : Ignition finished successfully Jan 29 12:56:30.093875 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:56:30.098186 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:56:30.783298 systemd-networkd[709]: eth0: Gained IPv6LL Jan 29 12:56:37.015695 coreos-metadata[749]: Jan 29 12:56:37.015 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:56:37.056401 coreos-metadata[749]: Jan 29 12:56:37.056 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:56:37.073611 coreos-metadata[749]: Jan 29 12:56:37.073 INFO Fetch successful Jan 29 12:56:37.075205 coreos-metadata[749]: Jan 29 12:56:37.074 INFO wrote hostname ci-4081-3-0-e-e47d9d4a8e.novalocal to /sysroot/etc/hostname Jan 29 12:56:37.077573 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 29 12:56:37.077812 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 29 12:56:37.089126 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:56:37.115231 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:56:37.143967 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (880) Jan 29 12:56:37.153094 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:56:37.153182 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:56:37.157271 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:56:37.170964 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:56:37.183285 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:56:37.240692 ignition[898]: INFO : Ignition 2.19.0 Jan 29 12:56:37.242486 ignition[898]: INFO : Stage: files Jan 29 12:56:37.242486 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:56:37.242486 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:56:37.248053 ignition[898]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:56:37.258472 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:56:37.258472 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:56:37.305039 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:56:37.307070 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:56:37.307070 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:56:37.306099 unknown[898]: wrote ssh authorized keys file for user: core Jan 29 12:56:37.320679 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:56:37.323120 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:56:37.323120 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:56:37.323120 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:56:37.323120 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:56:37.323120 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:56:37.323120 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:56:37.323120 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:56:37.323120 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:56:37.342762 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:56:37.916060 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 29 12:56:39.508508 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:56:39.508508 ignition[898]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 29 12:56:39.513137 ignition[898]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:56:39.513137 ignition[898]: INFO : files: op(8): op(9): [finished] writing systemd drop-in 
"10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:56:39.513137 ignition[898]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 29 12:56:39.513137 ignition[898]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:56:39.513137 ignition[898]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:56:39.513137 ignition[898]: INFO : files: files passed Jan 29 12:56:39.513137 ignition[898]: INFO : Ignition finished successfully Jan 29 12:56:39.513229 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:56:39.525060 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:56:39.538034 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:56:39.538931 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:56:39.539024 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:56:39.550461 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:56:39.550461 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:56:39.556548 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:56:39.561821 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:56:39.563076 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:56:39.576165 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:56:39.620252 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:56:39.620463 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:56:39.622713 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:56:39.624718 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:56:39.626748 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:56:39.633148 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:56:39.657552 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:56:39.666172 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:56:39.701218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:56:39.702975 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:56:39.706189 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:56:39.718314 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:56:39.718606 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:56:39.722121 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:56:39.724128 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:56:39.727085 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:56:39.729653 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 29 12:56:39.732224 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:56:39.735270 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:56:39.738264 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:56:39.741345 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:56:39.744284 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:56:39.747324 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:56:39.750075 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:56:39.750346 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:56:39.753443 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:56:39.755401 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:56:39.758008 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:56:39.758825 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:56:39.761223 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:56:39.761596 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:56:39.765275 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:56:39.765609 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:56:39.767376 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:56:39.767642 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:56:39.778496 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:56:39.795982 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:56:39.796593 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:56:39.796851 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:56:39.803161 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:56:39.803335 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:56:39.811391 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:56:39.811489 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:56:39.821350 ignition[951]: INFO : Ignition 2.19.0 Jan 29 12:56:39.821350 ignition[951]: INFO : Stage: umount Jan 29 12:56:39.826995 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:56:39.826995 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:56:39.826995 ignition[951]: INFO : umount: umount passed Jan 29 12:56:39.826995 ignition[951]: INFO : Ignition finished successfully Jan 29 12:56:39.829231 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:56:39.829351 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:56:39.832389 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:56:39.832433 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:56:39.835993 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:56:39.836034 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:56:39.837436 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 29 12:56:39.837477 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:56:39.840265 systemd[1]: Stopped target network.target - Network. Jan 29 12:56:39.841223 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:56:39.841266 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:56:39.843231 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:56:39.844181 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:56:39.844416 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:56:39.845714 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:56:39.847643 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:56:39.850141 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:56:39.850227 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:56:39.852795 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:56:39.852880 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:56:39.854575 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:56:39.854674 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:56:39.857651 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:56:39.857755 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:56:39.859742 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:56:39.862577 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:56:39.863129 systemd-networkd[709]: eth0: DHCPv6 lease lost Jan 29 12:56:39.867629 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:56:39.872400 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:56:39.872706 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:56:39.876711 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:56:39.876866 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:56:39.881275 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:56:39.881351 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:56:39.885964 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:56:39.886457 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:56:39.886502 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:56:39.887111 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:56:39.887151 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:56:39.889052 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:56:39.889114 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:56:39.889679 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:56:39.889723 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:56:39.890421 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:56:39.902108 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 29 12:56:39.902663 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:56:39.903962 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:56:39.904070 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:56:39.905804 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:56:39.906063 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:56:39.907491 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:56:39.907529 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:56:39.912448 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:56:39.912536 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:56:39.914057 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:56:39.914107 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:56:39.915143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:56:39.915226 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:56:39.922271 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:56:39.923604 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:56:39.923707 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:56:39.925353 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:56:39.925406 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:56:39.927754 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:56:39.927810 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:56:39.928353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:56:39.928395 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:56:39.931940 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:56:39.932304 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:56:40.053430 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:56:40.053747 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:56:40.057278 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:56:40.059142 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:56:40.059263 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:56:40.069333 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:56:40.089275 systemd[1]: Switching root. Jan 29 12:56:40.127826 systemd-journald[184]: Journal stopped Jan 29 12:56:41.994495 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
Jan 29 12:56:41.994566 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:56:41.994585 kernel: SELinux: policy capability open_perms=1 Jan 29 12:56:41.994600 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:56:41.994615 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:56:41.994629 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:56:41.994642 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:56:41.994656 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:56:41.994671 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:56:41.994685 kernel: audit: type=1403 audit(1738155400.652:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:56:41.994699 systemd[1]: Successfully loaded SELinux policy in 76.664ms. Jan 29 12:56:41.994723 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.178ms. Jan 29 12:56:41.994739 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:56:41.994758 systemd[1]: Detected virtualization kvm. Jan 29 12:56:41.994772 systemd[1]: Detected architecture x86-64. Jan 29 12:56:41.994786 systemd[1]: Detected first boot. Jan 29 12:56:41.994802 systemd[1]: Hostname set to . Jan 29 12:56:41.994816 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:56:41.994830 zram_generator::config[1011]: No configuration found. Jan 29 12:56:41.994845 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:56:41.994860 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:56:41.994878 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:56:41.994932 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:56:41.994949 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:56:41.994966 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:56:41.994980 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:56:41.994995 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:56:41.995013 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:56:41.995027 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:56:41.995042 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:56:41.995058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:56:41.995072 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:56:41.995087 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:56:41.995105 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:56:41.995119 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:56:41.995134 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 12:56:41.995148 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:56:41.995163 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:56:41.995176 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:56:41.995191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:56:41.995205 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:56:41.995221 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:56:41.995236 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:56:41.995251 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:56:41.995265 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:56:41.995279 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:56:41.995293 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:56:41.995307 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:56:41.995321 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:56:41.995338 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:56:41.995352 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:56:41.995367 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:56:41.995382 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:56:41.995397 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:56:41.995411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:56:41.995425 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:56:41.995439 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:56:41.995453 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:56:41.995472 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:56:41.995487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:56:41.995501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:56:41.995515 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:56:41.995529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:56:41.995544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:56:41.995559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:56:41.995573 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:56:41.995589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:56:41.995604 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:56:41.995619 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 29 12:56:41.995633 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 29 12:56:41.995648 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:56:41.995661 kernel: loop: module loaded Jan 29 12:56:41.995675 kernel: fuse: init (API version 7.39) Jan 29 12:56:41.995961 kernel: ACPI: bus type drm_connector registered Jan 29 12:56:41.995979 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:56:41.995999 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:56:41.996035 systemd-journald[1122]: Collecting audit messages is disabled. Jan 29 12:56:41.996064 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:56:41.996081 systemd-journald[1122]: Journal started Jan 29 12:56:41.996110 systemd-journald[1122]: Runtime Journal (/run/log/journal/3a52b605febe41b3bcabcac6fa94d097) is 8.0M, max 78.3M, 70.3M free. Jan 29 12:56:41.998915 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:56:42.003907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:56:42.009940 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:56:42.011384 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:56:42.012037 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:56:42.012605 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:56:42.013164 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:56:42.013746 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:56:42.014344 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:56:42.015111 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:56:42.015862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:56:42.016618 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:56:42.016767 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:56:42.017603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:56:42.017744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:56:42.018479 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:56:42.018625 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:56:42.019492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:56:42.019631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:56:42.020651 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:56:42.020802 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:56:42.021637 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:56:42.021870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:56:42.022795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:56:42.023630 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 29 12:56:42.024628 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:56:42.035420 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:56:42.041013 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:56:42.043938 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:56:42.045991 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:56:42.053069 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:56:42.061252 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:56:42.065060 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:56:42.072026 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:56:42.073003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:56:42.084344 systemd-journald[1122]: Time spent on flushing to /var/log/journal/3a52b605febe41b3bcabcac6fa94d097 is 51.143ms for 914 entries. Jan 29 12:56:42.084344 systemd-journald[1122]: System Journal (/var/log/journal/3a52b605febe41b3bcabcac6fa94d097) is 8.0M, max 584.8M, 576.8M free. Jan 29 12:56:42.163696 systemd-journald[1122]: Received client request to flush runtime journal. Jan 29 12:56:42.083169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:56:42.088868 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:56:42.090869 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:56:42.092541 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:56:42.114907 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:56:42.117738 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:56:42.133296 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:56:42.141199 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:56:42.156538 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 12:56:42.157630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:56:42.170373 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:56:42.172492 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 29 12:56:42.172512 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 29 12:56:42.181405 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:56:42.187137 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:56:42.233835 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:56:42.253256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:56:42.267913 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. 
Jan 29 12:56:42.267938 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 29 12:56:42.273217 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:56:43.124693 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:56:43.147243 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:56:43.201774 systemd-udevd[1196]: Using default interface naming scheme 'v255'. Jan 29 12:56:43.235090 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:56:43.250671 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:56:43.286164 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:56:43.319967 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 29 12:56:43.371997 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 12:56:43.378634 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:56:43.393578 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:56:43.407905 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1205) Jan 29 12:56:43.420902 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 12:56:43.460058 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 12:56:43.495983 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:56:43.506872 systemd-networkd[1206]: lo: Link UP Jan 29 12:56:43.506913 systemd-networkd[1206]: lo: Gained carrier Jan 29 12:56:43.509181 systemd-networkd[1206]: Enumeration completed Jan 29 12:56:43.509303 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:56:43.509686 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:56:43.509692 systemd-networkd[1206]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:56:43.510612 systemd-networkd[1206]: eth0: Link UP Jan 29 12:56:43.510623 systemd-networkd[1206]: eth0: Gained carrier Jan 29 12:56:43.510638 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:56:43.516082 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:56:43.521282 systemd-networkd[1206]: eth0: DHCPv4 address 172.24.4.160/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:56:43.527005 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:56:43.551265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:56:43.556877 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 12:56:43.556940 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 12:56:43.559632 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
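In the entries above, systemd-networkd enumerates eth0 and acquires a DHCPv4 lease of 172.24.4.160/24 with gateway 172.24.4.1. A quick standard-library check confirms the gateway is on-link for that prefix; the values are copied from the log and the script itself is only illustrative.

#!/usr/bin/env python3
"""Sanity-check the DHCPv4 lease reported by systemd-networkd (values from the log)."""
import ipaddress

iface = ipaddress.ip_interface("172.24.4.160/24")    # address acquired on eth0
gateway = ipaddress.ip_address("172.24.4.1")         # gateway handed out by 172.24.4.1

print("network:", iface.network)                     # 172.24.4.0/24
print("gateway on-link:", gateway in iface.network)  # True -> default route is directly reachable
print("usable hosts in prefix:", iface.network.num_addresses - 2)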
Jan 29 12:56:43.564291 kernel: Console: switching to colour dummy device 80x25 Jan 29 12:56:43.567859 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 12:56:43.567942 kernel: [drm] features: -context_init Jan 29 12:56:43.574630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:56:43.574877 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:56:43.578128 kernel: [drm] number of scanouts: 1 Jan 29 12:56:43.578213 kernel: [drm] number of cap sets: 0 Jan 29 12:56:43.580903 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 12:56:43.582111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:56:43.591558 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 12:56:43.591599 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 12:56:43.603147 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 12:56:43.604480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:56:43.604724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:56:43.610131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:56:43.617275 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:56:43.620053 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:56:43.648590 lvm[1245]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:56:43.677942 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:56:43.680570 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:56:43.688315 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:56:43.707301 lvm[1250]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:56:43.729326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:56:43.746835 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:56:43.747498 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:56:43.747618 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:56:43.747647 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:56:43.747731 systemd[1]: Reached target machines.target - Containers. Jan 29 12:56:43.749306 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:56:43.757172 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:56:43.759719 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:56:43.761466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:56:43.764133 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:56:43.768781 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 29 12:56:43.774163 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:56:43.775854 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:56:43.792275 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:56:43.807910 kernel: loop0: detected capacity change from 0 to 142488 Jan 29 12:56:43.847015 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:56:43.848878 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:56:43.909562 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:56:43.932980 kernel: loop1: detected capacity change from 0 to 210664 Jan 29 12:56:44.000948 kernel: loop2: detected capacity change from 0 to 8 Jan 29 12:56:44.034060 kernel: loop3: detected capacity change from 0 to 140768 Jan 29 12:56:44.120672 kernel: loop4: detected capacity change from 0 to 142488 Jan 29 12:56:44.170617 kernel: loop5: detected capacity change from 0 to 210664 Jan 29 12:56:44.236061 kernel: loop6: detected capacity change from 0 to 8 Jan 29 12:56:44.244853 kernel: loop7: detected capacity change from 0 to 140768 Jan 29 12:56:44.290859 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 29 12:56:44.292094 (sd-merge)[1276]: Merged extensions into '/usr'. Jan 29 12:56:44.302466 systemd[1]: Reloading requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:56:44.302505 systemd[1]: Reloading... Jan 29 12:56:44.378991 zram_generator::config[1303]: No configuration found. Jan 29 12:56:44.560368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:56:44.629254 systemd[1]: Reloading finished in 325 ms. Jan 29 12:56:44.643709 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:56:44.657109 systemd[1]: Starting ensure-sysext.service... Jan 29 12:56:44.665048 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:56:44.674998 systemd[1]: Reloading requested from client PID 1366 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:56:44.675020 systemd[1]: Reloading... Jan 29 12:56:44.742152 zram_generator::config[1396]: No configuration found. Jan 29 12:56:44.809687 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:56:44.810109 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:56:44.811681 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:56:44.812044 systemd-tmpfiles[1367]: ACLs are not supported, ignoring. Jan 29 12:56:44.812115 systemd-tmpfiles[1367]: ACLs are not supported, ignoring. Jan 29 12:56:44.838318 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:56:44.838333 systemd-tmpfiles[1367]: Skipping /boot Jan 29 12:56:44.847113 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot. 
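The (sd-merge) entries above show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-openstack' images and merging them into /usr, after which systemd reloads. As a rough illustration (not how systemd-sysext is implemented, and the directory list is an assumption based on common sysext search paths; the log itself only shows /etc/extensions), a script like this would enumerate the candidate images:

#!/usr/bin/env python3
"""Illustrative only: list sysext images the way the merge step above might see them."""
from pathlib import Path

# Assumed search paths; only /etc/extensions is confirmed by the log (kubernetes.raw link).
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]


def candidate_extensions():
    for directory in SEARCH_DIRS:
        base = Path(directory)
        if not base.is_dir():
            continue
        # systemd-sysext accepts raw image files (*.raw) as well as plain directories.
        for entry in sorted(base.iterdir()):
            if entry.is_dir() or entry.suffix == ".raw":
                yield entry


if __name__ == "__main__":
    for ext in candidate_extensions():
        print(ext)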
Jan 29 12:56:44.847241 systemd-tmpfiles[1367]: Skipping /boot Jan 29 12:56:44.910585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:56:44.976997 systemd[1]: Reloading finished in 301 ms. Jan 29 12:56:44.994994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:56:45.009071 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:56:45.053119 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:56:45.060117 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:56:45.077110 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:56:45.093137 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:56:45.121974 systemd-networkd[1206]: eth0: Gained IPv6LL Jan 29 12:56:45.127450 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:56:45.140760 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:56:45.141141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:56:45.147142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:56:45.156317 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:56:45.182194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:56:45.183496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:56:45.183700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:56:45.188209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:56:45.188396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:56:45.198734 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:56:45.198941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:56:45.210813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:56:45.211030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:56:45.224590 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:56:45.233903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:56:45.236946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:56:45.241838 systemd-resolved[1463]: Positive Trust Anchors: Jan 29 12:56:45.241859 systemd-resolved[1463]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:56:45.241943 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:56:45.242151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:56:45.252249 systemd-resolved[1463]: Using system hostname 'ci-4081-3-0-e-e47d9d4a8e.novalocal'. Jan 29 12:56:45.256198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:56:45.273160 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:56:45.286058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:56:45.286932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:56:45.287007 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:56:45.287454 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:56:45.292213 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:56:45.294260 systemd[1]: Finished ensure-sysext.service. Jan 29 12:56:45.298224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:56:45.298610 augenrules[1504]: No rules Jan 29 12:56:45.298414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:56:45.302218 ldconfig[1258]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:56:45.302927 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:56:45.306318 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:56:45.306482 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:56:45.309711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:56:45.310374 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:56:45.313722 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:56:45.314035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:56:45.323344 systemd[1]: Reached target network.target - Network. Jan 29 12:56:45.326553 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:56:45.330226 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:56:45.330823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:56:45.330913 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:56:45.340026 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 29 12:56:45.341286 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:56:45.351173 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:56:45.382081 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:56:45.405127 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:56:45.407732 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:56:45.421276 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:56:45.422322 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:56:45.422898 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:56:45.424798 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:56:45.425305 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:56:45.425806 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:56:45.425841 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:56:45.429848 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:56:45.433862 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:56:45.436063 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:56:45.437301 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:56:45.439460 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:56:45.442343 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:56:45.449440 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:56:46.324653 systemd-resolved[1463]: Clock change detected. Flushing caches. Jan 29 12:56:46.325078 systemd-timesyncd[1520]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Jan 29 12:56:46.325150 systemd-timesyncd[1520]: Initial clock synchronization to Wed 2025-01-29 12:56:46.324577 UTC. Jan 29 12:56:46.326222 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:56:46.328781 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:56:46.329351 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:56:46.332188 systemd[1]: System is tainted: cgroupsv1 Jan 29 12:56:46.332252 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:56:46.332277 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:56:46.336863 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:56:46.342023 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:56:46.358035 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:56:46.361397 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:56:46.364708 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
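systemd-timesyncd reports contacting 162.159.200.1:123 (0.flatcar.pool.ntp.org), and systemd-resolved flushes its caches after the resulting clock jump. The snippet below is a minimal SNTP query against the same pool name, for illustration only; it is not how timesyncd synchronizes the clock, and the sntp_time helper is mine.

#!/usr/bin/env python3
"""Minimal SNTP query (illustrative; not timesyncd's implementation)."""
import socket
import struct
import time

NTP_SERVER = "0.flatcar.pool.ntp.org"  # pool name from the log
NTP_EPOCH_OFFSET = 2208988800          # seconds between 1900-01-01 and 1970-01-01


def sntp_time(server: str = NTP_SERVER, timeout: float = 5.0) -> float:
    # 48-byte client request: LI=0, VN=3, Mode=3 (client) packed into the first byte.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # The transmit timestamp's seconds field is the 11th 32-bit word of the reply.
    transmit_seconds = struct.unpack("!12I", data)[10]
    return transmit_seconds - NTP_EPOCH_OFFSET


if __name__ == "__main__":
    server_time = sntp_time()
    print("server time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(server_time)))
    print("local offset (s):", round(server_time - time.time(), 3))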
Jan 29 12:56:46.373687 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:56:46.385999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:56:46.397018 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:56:46.409720 jq[1534]: false Jan 29 12:56:46.410999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:56:46.425349 extend-filesystems[1535]: Found loop4 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found loop5 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found loop6 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found loop7 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found vda Jan 29 12:56:46.433809 extend-filesystems[1535]: Found vda1 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found vda2 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found vda3 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found usr Jan 29 12:56:46.433809 extend-filesystems[1535]: Found vda4 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found vda6 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found vda7 Jan 29 12:56:46.433809 extend-filesystems[1535]: Found vda9 Jan 29 12:56:46.433809 extend-filesystems[1535]: Checking size of /dev/vda9 Jan 29 12:56:46.428369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:56:46.500991 extend-filesystems[1535]: Resized partition /dev/vda9 Jan 29 12:56:46.451216 dbus-daemon[1533]: [system] SELinux support is enabled Jan 29 12:56:46.442354 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:56:46.506921 extend-filesystems[1568]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:56:46.522931 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 29 12:56:46.459839 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:56:46.476080 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:56:46.481647 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:56:46.495947 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:56:46.508497 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:56:46.531970 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 29 12:56:46.570984 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1204) Jan 29 12:56:46.524225 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:56:46.571114 jq[1567]: true Jan 29 12:56:46.571283 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:56:46.571283 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:56:46.571283 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 29 12:56:46.591080 update_engine[1566]: I20250129 12:56:46.536170 1566 main.cc:92] Flatcar Update Engine starting Jan 29 12:56:46.591080 update_engine[1566]: I20250129 12:56:46.537714 1566 update_check_scheduler.cc:74] Next update check in 2m46s Jan 29 12:56:46.524468 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
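The extend-filesystems unit above grows /dev/vda9 online from 1,617,920 to 2,014,203 blocks of 4 KiB. Converting those kernel-reported block counts into sizes is a back-of-the-envelope check, not part of the boot flow:

#!/usr/bin/env python3
"""Convert the resize2fs block counts from the log into byte sizes (4 KiB blocks)."""
BLOCK_SIZE = 4096          # "(4k) blocks" per the kernel message
OLD_BLOCKS = 1_617_920     # size before the resize, from the log
NEW_BLOCKS = 2_014_203     # size after the resize, from the log


def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30


print(f"before: {gib(OLD_BLOCKS):.2f} GiB")               # ~6.17 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")               # ~7.68 GiB
print(f"grown:  {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")  # ~1.51 GiB

In other words, the root filesystem goes from roughly 6.2 GiB to about 7.7 GiB before the boot continues.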
Jan 29 12:56:46.595847 extend-filesystems[1535]: Resized filesystem in /dev/vda9 Jan 29 12:56:46.525753 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:56:46.528043 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:56:46.540387 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:56:46.548463 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:56:46.548706 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:56:46.569209 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:56:46.569519 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:56:46.610152 (ntainerd)[1581]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:56:46.612116 jq[1580]: true Jan 29 12:56:46.650448 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:56:46.660726 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:56:46.661862 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:56:46.662531 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:56:46.662549 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:56:46.664542 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:56:46.669928 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:56:46.678812 systemd-logind[1561]: New seat seat0. Jan 29 12:56:46.688904 systemd-logind[1561]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:56:46.688928 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:56:46.701245 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:56:46.806787 bash[1606]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:56:46.805759 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:56:46.822247 systemd[1]: Starting sshkeys.service... Jan 29 12:56:46.844033 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:56:46.851625 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:56:46.863166 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:56:47.088893 containerd[1581]: time="2025-01-29T12:56:47.087729659Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:56:47.146826 containerd[1581]: time="2025-01-29T12:56:47.145572944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:56:47.148191 containerd[1581]: time="2025-01-29T12:56:47.148147984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:56:47.148247 containerd[1581]: time="2025-01-29T12:56:47.148190293Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:56:47.148247 containerd[1581]: time="2025-01-29T12:56:47.148211202Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:56:47.148417 containerd[1581]: time="2025-01-29T12:56:47.148391190Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:56:47.148447 containerd[1581]: time="2025-01-29T12:56:47.148419453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:56:47.148509 containerd[1581]: time="2025-01-29T12:56:47.148483603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:56:47.148542 containerd[1581]: time="2025-01-29T12:56:47.148506686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:56:47.148785 containerd[1581]: time="2025-01-29T12:56:47.148756385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:56:47.148842 containerd[1581]: time="2025-01-29T12:56:47.148782824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:56:47.148842 containerd[1581]: time="2025-01-29T12:56:47.148819693Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:56:47.148842 containerd[1581]: time="2025-01-29T12:56:47.148833038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:56:47.148940 containerd[1581]: time="2025-01-29T12:56:47.148918118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:56:47.149171 containerd[1581]: time="2025-01-29T12:56:47.149146266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:56:47.149325 containerd[1581]: time="2025-01-29T12:56:47.149299754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:56:47.149352 containerd[1581]: time="2025-01-29T12:56:47.149323017Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:56:47.149456 containerd[1581]: time="2025-01-29T12:56:47.149428114Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 12:56:47.149562 containerd[1581]: time="2025-01-29T12:56:47.149528994Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:56:47.167813 containerd[1581]: time="2025-01-29T12:56:47.167755878Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:56:47.168076 containerd[1581]: time="2025-01-29T12:56:47.168047175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:56:47.168118 containerd[1581]: time="2025-01-29T12:56:47.168078113Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:56:47.168118 containerd[1581]: time="2025-01-29T12:56:47.168098150Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:56:47.168499 containerd[1581]: time="2025-01-29T12:56:47.168470438Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:56:47.169890 containerd[1581]: time="2025-01-29T12:56:47.169864303Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170443849Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170606875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170629167Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170667369Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170685433Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170702124Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170717032Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170750525Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170769300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170785300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170823061Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170838991Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170862895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.171802 containerd[1581]: time="2025-01-29T12:56:47.170898152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.170915514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.170931965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.170946873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.170979925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.170996857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171014019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171029438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171063411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171079842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171095081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171108887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171126339Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171153380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171175041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172209 containerd[1581]: time="2025-01-29T12:56:47.171189127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:56:47.172585 containerd[1581]: time="2025-01-29T12:56:47.171235264Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:56:47.172585 containerd[1581]: time="2025-01-29T12:56:47.171255231Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:56:47.172585 containerd[1581]: time="2025-01-29T12:56:47.171267905Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:56:47.172585 containerd[1581]: time="2025-01-29T12:56:47.171281921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:56:47.172585 containerd[1581]: time="2025-01-29T12:56:47.171295437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172585 containerd[1581]: time="2025-01-29T12:56:47.171309463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:56:47.172585 containerd[1581]: time="2025-01-29T12:56:47.171324301Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:56:47.172585 containerd[1581]: time="2025-01-29T12:56:47.171336193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:56:47.172778 containerd[1581]: time="2025-01-29T12:56:47.171643529Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:56:47.172778 containerd[1581]: time="2025-01-29T12:56:47.171718851Z" level=info msg="Connect containerd service" Jan 29 12:56:47.172778 containerd[1581]: time="2025-01-29T12:56:47.171758495Z" level=info msg="using legacy CRI server" Jan 29 12:56:47.172778 containerd[1581]: time="2025-01-29T12:56:47.171766560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:56:47.173047 containerd[1581]: time="2025-01-29T12:56:47.172944900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:56:47.175819 containerd[1581]: time="2025-01-29T12:56:47.174539942Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:56:47.175819 containerd[1581]: time="2025-01-29T12:56:47.175274800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:56:47.175819 containerd[1581]: time="2025-01-29T12:56:47.175361543Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:56:47.175819 containerd[1581]: time="2025-01-29T12:56:47.175633262Z" level=info msg="Start subscribing containerd event" Jan 29 12:56:47.175819 containerd[1581]: time="2025-01-29T12:56:47.175723291Z" level=info msg="Start recovering state" Jan 29 12:56:47.178510 containerd[1581]: time="2025-01-29T12:56:47.175832646Z" level=info msg="Start event monitor" Jan 29 12:56:47.178510 containerd[1581]: time="2025-01-29T12:56:47.175849648Z" level=info msg="Start snapshots syncer" Jan 29 12:56:47.178510 containerd[1581]: time="2025-01-29T12:56:47.175867401Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:56:47.178510 containerd[1581]: time="2025-01-29T12:56:47.175875917Z" level=info msg="Start streaming server" Jan 29 12:56:47.178510 containerd[1581]: time="2025-01-29T12:56:47.175990342Z" level=info msg="containerd successfully booted in 0.090670s" Jan 29 12:56:47.176116 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:56:47.284497 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:56:47.315723 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:56:47.330401 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:56:47.349969 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:56:47.350217 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:56:47.363743 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:56:47.374152 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:56:47.385317 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:56:47.394183 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:56:47.395209 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:56:48.889108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:56:48.909374 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:56:50.187246 kubelet[1654]: E0129 12:56:50.187085 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:56:50.192401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:56:50.194349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:56:50.818650 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:56:50.830990 systemd[1]: Started sshd@0-172.24.4.160:22-172.24.4.1:34388.service - OpenSSH per-connection server daemon (172.24.4.1:34388). Jan 29 12:56:52.007422 sshd[1665]: Accepted publickey for core from 172.24.4.1 port 34388 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:56:52.014305 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:56:52.046900 systemd-logind[1561]: New session 1 of user core. Jan 29 12:56:52.050435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:56:52.065008 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:56:52.114042 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:56:52.129203 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:56:52.148750 (systemd)[1671]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:56:52.314454 systemd[1671]: Queued start job for default target default.target. Jan 29 12:56:52.314951 systemd[1671]: Created slice app.slice - User Application Slice. Jan 29 12:56:52.314982 systemd[1671]: Reached target paths.target - Paths. Jan 29 12:56:52.315006 systemd[1671]: Reached target timers.target - Timers. Jan 29 12:56:52.319925 systemd[1671]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:56:52.329014 systemd[1671]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:56:52.329094 systemd[1671]: Reached target sockets.target - Sockets. Jan 29 12:56:52.329117 systemd[1671]: Reached target basic.target - Basic System. Jan 29 12:56:52.329171 systemd[1671]: Reached target default.target - Main User Target. Jan 29 12:56:52.329210 systemd[1671]: Startup finished in 170ms. Jan 29 12:56:52.330301 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:56:52.342276 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:56:52.447361 login[1642]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:56:52.455303 systemd-logind[1561]: New session 2 of user core. Jan 29 12:56:52.456463 login[1643]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:56:52.461484 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:56:52.474191 systemd-logind[1561]: New session 3 of user core. Jan 29 12:56:52.477143 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:56:52.834445 systemd[1]: Started sshd@1-172.24.4.160:22-172.24.4.1:34390.service - OpenSSH per-connection server daemon (172.24.4.1:34390). 
Jan 29 12:56:53.457984 coreos-metadata[1531]: Jan 29 12:56:53.457 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:56:53.511488 coreos-metadata[1531]: Jan 29 12:56:53.511 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 12:56:53.708374 coreos-metadata[1531]: Jan 29 12:56:53.708 INFO Fetch successful Jan 29 12:56:53.708374 coreos-metadata[1531]: Jan 29 12:56:53.708 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:56:53.721934 coreos-metadata[1531]: Jan 29 12:56:53.721 INFO Fetch successful Jan 29 12:56:53.721934 coreos-metadata[1531]: Jan 29 12:56:53.721 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 12:56:53.735778 coreos-metadata[1531]: Jan 29 12:56:53.735 INFO Fetch successful Jan 29 12:56:53.735778 coreos-metadata[1531]: Jan 29 12:56:53.735 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 12:56:53.749348 coreos-metadata[1531]: Jan 29 12:56:53.749 INFO Fetch successful Jan 29 12:56:53.749348 coreos-metadata[1531]: Jan 29 12:56:53.749 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 12:56:53.762962 coreos-metadata[1531]: Jan 29 12:56:53.762 INFO Fetch successful Jan 29 12:56:53.762962 coreos-metadata[1531]: Jan 29 12:56:53.762 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 12:56:53.776923 coreos-metadata[1531]: Jan 29 12:56:53.776 INFO Fetch successful Jan 29 12:56:53.827299 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:56:53.830379 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:56:53.955725 coreos-metadata[1614]: Jan 29 12:56:53.955 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:56:54.004011 coreos-metadata[1614]: Jan 29 12:56:54.003 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 12:56:54.019825 coreos-metadata[1614]: Jan 29 12:56:54.019 INFO Fetch successful Jan 29 12:56:54.019825 coreos-metadata[1614]: Jan 29 12:56:54.019 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:56:54.034536 coreos-metadata[1614]: Jan 29 12:56:54.034 INFO Fetch successful Jan 29 12:56:54.065854 unknown[1614]: wrote ssh authorized keys file for user: core Jan 29 12:56:54.495487 update-ssh-keys[1725]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:56:54.495972 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:56:54.501127 systemd[1]: Finished sshkeys.service. Jan 29 12:56:54.509474 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:56:54.510020 systemd[1]: Startup finished in 16.112s (kernel) + 13.060s (userspace) = 29.173s. Jan 29 12:56:54.607551 sshd[1711]: Accepted publickey for core from 172.24.4.1 port 34390 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:56:54.610881 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:56:54.621782 systemd-logind[1561]: New session 4 of user core. Jan 29 12:56:54.632625 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 29 12:56:55.346747 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 29 12:56:55.357588 systemd[1]: Started sshd@2-172.24.4.160:22-172.24.4.1:58848.service - OpenSSH per-connection server daemon (172.24.4.1:58848). Jan 29 12:56:55.360462 systemd[1]: sshd@1-172.24.4.160:22-172.24.4.1:34390.service: Deactivated successfully. Jan 29 12:56:55.363234 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:56:55.365784 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:56:55.371019 systemd-logind[1561]: Removed session 4. Jan 29 12:56:56.711934 sshd[1736]: Accepted publickey for core from 172.24.4.1 port 58848 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:56:56.714774 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:56:56.725225 systemd-logind[1561]: New session 5 of user core. Jan 29 12:56:56.746233 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:56:57.320324 sshd[1736]: pam_unix(sshd:session): session closed for user core Jan 29 12:56:57.330216 systemd[1]: Started sshd@3-172.24.4.160:22-172.24.4.1:58856.service - OpenSSH per-connection server daemon (172.24.4.1:58856). Jan 29 12:56:57.331209 systemd[1]: sshd@2-172.24.4.160:22-172.24.4.1:58848.service: Deactivated successfully. Jan 29 12:56:57.344269 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:56:57.347066 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:56:57.350144 systemd-logind[1561]: Removed session 5. Jan 29 12:56:58.823282 sshd[1744]: Accepted publickey for core from 172.24.4.1 port 58856 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:56:58.826613 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:56:58.838330 systemd-logind[1561]: New session 6 of user core. Jan 29 12:56:58.856407 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:56:59.370974 sshd[1744]: pam_unix(sshd:session): session closed for user core Jan 29 12:56:59.389020 systemd[1]: Started sshd@4-172.24.4.160:22-172.24.4.1:58864.service - OpenSSH per-connection server daemon (172.24.4.1:58864). Jan 29 12:56:59.390156 systemd[1]: sshd@3-172.24.4.160:22-172.24.4.1:58856.service: Deactivated successfully. Jan 29 12:56:59.398467 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:56:59.399049 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:56:59.404651 systemd-logind[1561]: Removed session 6. Jan 29 12:57:00.218902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:57:00.230117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:57:00.705878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:57:00.709962 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:57:00.722017 sshd[1752]: Accepted publickey for core from 172.24.4.1 port 58864 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:00.727715 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:00.742047 systemd-logind[1561]: New session 7 of user core. Jan 29 12:57:00.750542 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 29 12:57:00.878196 kubelet[1769]: E0129 12:57:00.878083 1769 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:57:00.886353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:57:00.887569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:57:01.363099 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:57:01.363741 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:57:01.383256 sudo[1780]: pam_unix(sudo:session): session closed for user root Jan 29 12:57:01.613231 sshd[1752]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:01.625254 systemd[1]: Started sshd@5-172.24.4.160:22-172.24.4.1:58880.service - OpenSSH per-connection server daemon (172.24.4.1:58880). Jan 29 12:57:01.626273 systemd[1]: sshd@4-172.24.4.160:22-172.24.4.1:58864.service: Deactivated successfully. Jan 29 12:57:01.635315 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:57:01.637961 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:57:01.641588 systemd-logind[1561]: Removed session 7. Jan 29 12:57:02.761570 sshd[1782]: Accepted publickey for core from 172.24.4.1 port 58880 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:02.764168 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:02.772763 systemd-logind[1561]: New session 8 of user core. Jan 29 12:57:02.781381 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:57:03.240827 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:57:03.241539 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:57:03.248506 sudo[1790]: pam_unix(sudo:session): session closed for user root Jan 29 12:57:03.259789 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:57:03.261147 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:57:03.284293 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:57:03.304757 auditctl[1793]: No rules Jan 29 12:57:03.305879 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:57:03.306385 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:57:03.318952 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:57:03.384541 augenrules[1812]: No rules Jan 29 12:57:03.385923 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:57:03.390694 sudo[1789]: pam_unix(sudo:session): session closed for user root Jan 29 12:57:03.652114 sshd[1782]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:03.670015 systemd[1]: Started sshd@6-172.24.4.160:22-172.24.4.1:56062.service - OpenSSH per-connection server daemon (172.24.4.1:56062). Jan 29 12:57:03.671104 systemd[1]: sshd@5-172.24.4.160:22-172.24.4.1:58880.service: Deactivated successfully. Jan 29 12:57:03.683572 systemd-logind[1561]: Session 8 logged out. 
Waiting for processes to exit. Jan 29 12:57:03.684971 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:57:03.690727 systemd-logind[1561]: Removed session 8. Jan 29 12:57:05.065598 sshd[1818]: Accepted publickey for core from 172.24.4.1 port 56062 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:05.068272 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:05.078869 systemd-logind[1561]: New session 9 of user core. Jan 29 12:57:05.089383 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:57:05.504636 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:57:05.505500 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:57:07.123722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:57:07.140628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:57:07.183463 systemd[1]: Reloading requested from client PID 1863 ('systemctl') (unit session-9.scope)... Jan 29 12:57:07.183649 systemd[1]: Reloading... Jan 29 12:57:07.279833 zram_generator::config[1901]: No configuration found. Jan 29 12:57:07.453054 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:57:07.532476 systemd[1]: Reloading finished in 348 ms. Jan 29 12:57:07.578176 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:57:07.578384 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:57:07.578714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:57:07.594422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:57:07.731960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:57:07.737922 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:57:08.266388 kubelet[1976]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:57:08.266388 kubelet[1976]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:57:08.266388 kubelet[1976]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:57:08.267192 kubelet[1976]: I0129 12:57:08.266508 1976 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:57:08.700651 kubelet[1976]: I0129 12:57:08.700453 1976 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:57:08.700651 kubelet[1976]: I0129 12:57:08.700499 1976 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:57:08.701489 kubelet[1976]: I0129 12:57:08.700734 1976 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:57:08.717252 kubelet[1976]: I0129 12:57:08.717179 1976 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:57:08.742881 kubelet[1976]: I0129 12:57:08.742770 1976 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:57:08.743188 kubelet[1976]: I0129 12:57:08.743086 1976 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:57:08.743382 kubelet[1976]: I0129 12:57:08.743142 1976 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:57:08.744243 kubelet[1976]: I0129 12:57:08.744178 1976 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:57:08.744243 kubelet[1976]: I0129 12:57:08.744203 1976 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:57:08.744838 kubelet[1976]: I0129 12:57:08.744751 1976 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:57:08.745934 kubelet[1976]: I0129 12:57:08.745902 1976 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:57:08.745934 kubelet[1976]: I0129 12:57:08.745922 1976 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:57:08.746059 kubelet[1976]: I0129 12:57:08.745943 1976 kubelet.go:312] "Adding apiserver pod source" Jan 29 
12:57:08.746059 kubelet[1976]: I0129 12:57:08.745963 1976 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:57:08.746517 kubelet[1976]: E0129 12:57:08.746451 1976 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:08.746517 kubelet[1976]: E0129 12:57:08.746501 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:08.751877 kubelet[1976]: I0129 12:57:08.751845 1976 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:57:08.753893 kubelet[1976]: I0129 12:57:08.753835 1976 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:57:08.753893 kubelet[1976]: W0129 12:57:08.753886 1976 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:57:08.754640 kubelet[1976]: I0129 12:57:08.754489 1976 server.go:1264] "Started kubelet" Jan 29 12:57:08.754894 kubelet[1976]: I0129 12:57:08.754783 1976 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:57:08.757535 kubelet[1976]: I0129 12:57:08.757135 1976 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:57:08.760843 kubelet[1976]: I0129 12:57:08.760402 1976 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:57:08.760843 kubelet[1976]: I0129 12:57:08.760653 1976 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:57:08.765390 kubelet[1976]: I0129 12:57:08.765057 1976 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:57:08.767361 kubelet[1976]: I0129 12:57:08.767349 1976 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:57:08.767592 kubelet[1976]: I0129 12:57:08.767563 1976 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:57:08.767758 kubelet[1976]: I0129 12:57:08.767733 1976 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:57:08.769564 kubelet[1976]: I0129 12:57:08.769546 1976 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:57:08.769732 kubelet[1976]: I0129 12:57:08.769714 1976 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:57:08.770905 kubelet[1976]: E0129 12:57:08.770870 1976 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:57:08.771656 kubelet[1976]: I0129 12:57:08.771645 1976 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:57:08.792364 kubelet[1976]: W0129 12:57:08.792331 1976 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 12:57:08.792635 kubelet[1976]: E0129 12:57:08.792569 1976 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 12:57:08.794809 kubelet[1976]: E0129 12:57:08.793868 1976 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.160.181f2b273f53aa7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.160,UID:172.24.4.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.160,},FirstTimestamp:2025-01-29 12:57:08.75446745 +0000 UTC m=+1.012500846,LastTimestamp:2025-01-29 12:57:08.75446745 +0000 UTC m=+1.012500846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.160,}" Jan 29 12:57:08.804723 kubelet[1976]: W0129 12:57:08.804693 1976 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.160" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 12:57:08.804880 kubelet[1976]: E0129 12:57:08.804870 1976 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.160" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 12:57:08.805566 kubelet[1976]: W0129 12:57:08.805551 1976 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 12:57:08.805833 kubelet[1976]: E0129 12:57:08.805820 1976 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 12:57:08.807332 kubelet[1976]: I0129 12:57:08.807308 1976 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:57:08.807473 kubelet[1976]: I0129 12:57:08.807390 1976 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:57:08.807473 kubelet[1976]: I0129 12:57:08.807410 1976 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:57:08.811372 kubelet[1976]: E0129 12:57:08.811311 1976 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.160\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 29 12:57:08.815808 kubelet[1976]: I0129 12:57:08.814037 1976 policy_none.go:49] "None policy: Start" Jan 29 12:57:08.815808 kubelet[1976]: E0129 12:57:08.813983 1976 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.160.181f2b27404dc977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.160,UID:172.24.4.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.24.4.160,},FirstTimestamp:2025-01-29 12:57:08.770859383 +0000 UTC m=+1.028892779,LastTimestamp:2025-01-29 12:57:08.770859383 +0000 UTC m=+1.028892779,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.160,}" Jan 29 12:57:08.817499 kubelet[1976]: I0129 12:57:08.817486 1976 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:57:08.817588 kubelet[1976]: I0129 12:57:08.817579 1976 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:57:08.828018 kubelet[1976]: I0129 12:57:08.827977 1976 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:57:08.828223 kubelet[1976]: I0129 12:57:08.828171 1976 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:57:08.828304 kubelet[1976]: I0129 12:57:08.828290 1976 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:57:08.832458 kubelet[1976]: E0129 12:57:08.832292 1976 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.160\" not found" Jan 29 12:57:08.869483 kubelet[1976]: I0129 12:57:08.868724 1976 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.160" Jan 29 12:57:08.875244 kubelet[1976]: I0129 12:57:08.875097 1976 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.160" Jan 29 12:57:08.886178 kubelet[1976]: I0129 12:57:08.886018 1976 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:57:08.886377 kubelet[1976]: E0129 12:57:08.886279 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:08.887894 kubelet[1976]: I0129 12:57:08.887277 1976 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:57:08.887894 kubelet[1976]: I0129 12:57:08.887304 1976 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:57:08.887894 kubelet[1976]: I0129 12:57:08.887324 1976 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:57:08.887894 kubelet[1976]: E0129 12:57:08.887368 1976 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 12:57:08.986765 kubelet[1976]: E0129 12:57:08.986699 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.087237 kubelet[1976]: E0129 12:57:09.087131 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.187992 kubelet[1976]: E0129 12:57:09.187910 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.271383 sudo[1825]: pam_unix(sudo:session): session closed for user root Jan 29 12:57:09.289100 kubelet[1976]: E0129 12:57:09.288998 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.390057 kubelet[1976]: E0129 12:57:09.389884 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.433412 sshd[1818]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:09.439507 systemd[1]: sshd@6-172.24.4.160:22-172.24.4.1:56062.service: Deactivated successfully. Jan 29 12:57:09.448339 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:57:09.449855 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:57:09.452620 systemd-logind[1561]: Removed session 9. 
Jan 29 12:57:09.490903 kubelet[1976]: E0129 12:57:09.490754 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.592053 kubelet[1976]: E0129 12:57:09.591730 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.692965 kubelet[1976]: E0129 12:57:09.692863 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.703415 kubelet[1976]: I0129 12:57:09.703276 1976 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 12:57:09.703822 kubelet[1976]: W0129 12:57:09.703735 1976 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:57:09.747495 kubelet[1976]: E0129 12:57:09.747423 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:09.793200 kubelet[1976]: E0129 12:57:09.793118 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.894394 kubelet[1976]: E0129 12:57:09.893851 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:09.994622 kubelet[1976]: E0129 12:57:09.994480 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:10.095700 kubelet[1976]: E0129 12:57:10.095541 1976 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.160\" not found" Jan 29 12:57:10.197252 kubelet[1976]: I0129 12:57:10.197078 1976 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 12:57:10.198032 containerd[1581]: time="2025-01-29T12:57:10.197864872Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 12:57:10.199083 kubelet[1976]: I0129 12:57:10.198597 1976 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 12:57:10.751363 kubelet[1976]: E0129 12:57:10.749479 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:10.751363 kubelet[1976]: I0129 12:57:10.751356 1976 apiserver.go:52] "Watching apiserver" Jan 29 12:57:10.766834 kubelet[1976]: I0129 12:57:10.765298 1976 topology_manager.go:215] "Topology Admit Handler" podUID="9739d5b7-b969-4dee-bd91-286c6a3c532d" podNamespace="kube-system" podName="kube-proxy-qjsdw" Jan 29 12:57:10.766834 kubelet[1976]: I0129 12:57:10.766740 1976 topology_manager.go:215] "Topology Admit Handler" podUID="b90fe22e-be05-49ce-8da6-f15a509540a3" podNamespace="calico-system" podName="calico-node-8kw4f" Jan 29 12:57:10.767134 kubelet[1976]: I0129 12:57:10.766936 1976 topology_manager.go:215] "Topology Admit Handler" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" podNamespace="calico-system" podName="csi-node-driver-wqlkd" Jan 29 12:57:10.767256 kubelet[1976]: E0129 12:57:10.767223 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wqlkd" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" Jan 29 12:57:10.772877 kubelet[1976]: I0129 12:57:10.771305 1976 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:57:10.786134 kubelet[1976]: I0129 12:57:10.786045 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43cc5aeb-8d72-4137-8afa-6422de953051-kubelet-dir\") pod \"csi-node-driver-wqlkd\" (UID: \"43cc5aeb-8d72-4137-8afa-6422de953051\") " pod="calico-system/csi-node-driver-wqlkd" Jan 29 12:57:10.786134 kubelet[1976]: I0129 12:57:10.786124 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9739d5b7-b969-4dee-bd91-286c6a3c532d-xtables-lock\") pod \"kube-proxy-qjsdw\" (UID: \"9739d5b7-b969-4dee-bd91-286c6a3c532d\") " pod="kube-system/kube-proxy-qjsdw" Jan 29 12:57:10.786313 kubelet[1976]: I0129 12:57:10.786195 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vp7\" (UniqueName: \"kubernetes.io/projected/9739d5b7-b969-4dee-bd91-286c6a3c532d-kube-api-access-z7vp7\") pod \"kube-proxy-qjsdw\" (UID: \"9739d5b7-b969-4dee-bd91-286c6a3c532d\") " pod="kube-system/kube-proxy-qjsdw" Jan 29 12:57:10.786313 kubelet[1976]: I0129 12:57:10.786239 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b90fe22e-be05-49ce-8da6-f15a509540a3-var-lib-calico\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.786313 kubelet[1976]: I0129 12:57:10.786276 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b90fe22e-be05-49ce-8da6-f15a509540a3-cni-log-dir\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " 
pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.786514 kubelet[1976]: I0129 12:57:10.786315 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/43cc5aeb-8d72-4137-8afa-6422de953051-varrun\") pod \"csi-node-driver-wqlkd\" (UID: \"43cc5aeb-8d72-4137-8afa-6422de953051\") " pod="calico-system/csi-node-driver-wqlkd" Jan 29 12:57:10.786514 kubelet[1976]: I0129 12:57:10.786379 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/43cc5aeb-8d72-4137-8afa-6422de953051-registration-dir\") pod \"csi-node-driver-wqlkd\" (UID: \"43cc5aeb-8d72-4137-8afa-6422de953051\") " pod="calico-system/csi-node-driver-wqlkd" Jan 29 12:57:10.786514 kubelet[1976]: I0129 12:57:10.786417 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9739d5b7-b969-4dee-bd91-286c6a3c532d-lib-modules\") pod \"kube-proxy-qjsdw\" (UID: \"9739d5b7-b969-4dee-bd91-286c6a3c532d\") " pod="kube-system/kube-proxy-qjsdw" Jan 29 12:57:10.786514 kubelet[1976]: I0129 12:57:10.786456 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b90fe22e-be05-49ce-8da6-f15a509540a3-xtables-lock\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.786514 kubelet[1976]: I0129 12:57:10.786497 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b90fe22e-be05-49ce-8da6-f15a509540a3-flexvol-driver-host\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.786782 kubelet[1976]: I0129 12:57:10.786535 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrm7\" (UniqueName: \"kubernetes.io/projected/b90fe22e-be05-49ce-8da6-f15a509540a3-kube-api-access-4mrm7\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.786782 kubelet[1976]: I0129 12:57:10.786572 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b90fe22e-be05-49ce-8da6-f15a509540a3-lib-modules\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.786782 kubelet[1976]: I0129 12:57:10.786606 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b90fe22e-be05-49ce-8da6-f15a509540a3-cni-net-dir\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.786782 kubelet[1976]: I0129 12:57:10.786641 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/43cc5aeb-8d72-4137-8afa-6422de953051-socket-dir\") pod \"csi-node-driver-wqlkd\" (UID: \"43cc5aeb-8d72-4137-8afa-6422de953051\") " pod="calico-system/csi-node-driver-wqlkd" Jan 29 12:57:10.786782 
kubelet[1976]: I0129 12:57:10.786677 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b90fe22e-be05-49ce-8da6-f15a509540a3-var-run-calico\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.787116 kubelet[1976]: I0129 12:57:10.786715 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b90fe22e-be05-49ce-8da6-f15a509540a3-cni-bin-dir\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.787116 kubelet[1976]: I0129 12:57:10.786751 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbr9j\" (UniqueName: \"kubernetes.io/projected/43cc5aeb-8d72-4137-8afa-6422de953051-kube-api-access-vbr9j\") pod \"csi-node-driver-wqlkd\" (UID: \"43cc5aeb-8d72-4137-8afa-6422de953051\") " pod="calico-system/csi-node-driver-wqlkd" Jan 29 12:57:10.787116 kubelet[1976]: I0129 12:57:10.786789 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9739d5b7-b969-4dee-bd91-286c6a3c532d-kube-proxy\") pod \"kube-proxy-qjsdw\" (UID: \"9739d5b7-b969-4dee-bd91-286c6a3c532d\") " pod="kube-system/kube-proxy-qjsdw" Jan 29 12:57:10.787116 kubelet[1976]: I0129 12:57:10.786866 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b90fe22e-be05-49ce-8da6-f15a509540a3-policysync\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.787116 kubelet[1976]: I0129 12:57:10.786902 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b90fe22e-be05-49ce-8da6-f15a509540a3-tigera-ca-bundle\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.787418 kubelet[1976]: I0129 12:57:10.786939 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b90fe22e-be05-49ce-8da6-f15a509540a3-node-certs\") pod \"calico-node-8kw4f\" (UID: \"b90fe22e-be05-49ce-8da6-f15a509540a3\") " pod="calico-system/calico-node-8kw4f" Jan 29 12:57:10.915857 kubelet[1976]: E0129 12:57:10.915527 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.915857 kubelet[1976]: W0129 12:57:10.915575 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.915857 kubelet[1976]: E0129 12:57:10.915625 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:10.920601 kubelet[1976]: E0129 12:57:10.920542 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.920715 kubelet[1976]: W0129 12:57:10.920589 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.920715 kubelet[1976]: E0129 12:57:10.920653 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.924583 kubelet[1976]: E0129 12:57:10.924319 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.924583 kubelet[1976]: W0129 12:57:10.924366 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.924583 kubelet[1976]: E0129 12:57:10.924408 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.926129 kubelet[1976]: E0129 12:57:10.925917 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.926129 kubelet[1976]: W0129 12:57:10.925945 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.926129 kubelet[1976]: E0129 12:57:10.925972 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.926633 kubelet[1976]: E0129 12:57:10.926605 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.927065 kubelet[1976]: W0129 12:57:10.926746 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.927065 kubelet[1976]: E0129 12:57:10.926782 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.927859 kubelet[1976]: E0129 12:57:10.927678 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.927859 kubelet[1976]: W0129 12:57:10.927706 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.927859 kubelet[1976]: E0129 12:57:10.927745 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:10.928403 kubelet[1976]: E0129 12:57:10.928376 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.938141 kubelet[1976]: W0129 12:57:10.938085 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.938643 kubelet[1976]: E0129 12:57:10.938421 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.943991 kubelet[1976]: E0129 12:57:10.943947 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.944351 kubelet[1976]: W0129 12:57:10.944128 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.944351 kubelet[1976]: E0129 12:57:10.944185 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.954532 kubelet[1976]: E0129 12:57:10.952296 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.954532 kubelet[1976]: W0129 12:57:10.952327 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.954532 kubelet[1976]: E0129 12:57:10.952374 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.955137 kubelet[1976]: E0129 12:57:10.955093 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.955278 kubelet[1976]: W0129 12:57:10.955253 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.955447 kubelet[1976]: E0129 12:57:10.955399 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:10.959844 kubelet[1976]: E0129 12:57:10.958567 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.959844 kubelet[1976]: W0129 12:57:10.958591 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.959844 kubelet[1976]: E0129 12:57:10.958773 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.959844 kubelet[1976]: W0129 12:57:10.958828 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.959844 kubelet[1976]: E0129 12:57:10.959005 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.959844 kubelet[1976]: W0129 12:57:10.959013 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.959844 kubelet[1976]: E0129 12:57:10.959272 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.959844 kubelet[1976]: W0129 12:57:10.959281 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.959844 kubelet[1976]: E0129 12:57:10.959296 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.959844 kubelet[1976]: E0129 12:57:10.959588 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.960450 kubelet[1976]: E0129 12:57:10.960112 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.960450 kubelet[1976]: W0129 12:57:10.960123 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.960450 kubelet[1976]: E0129 12:57:10.960135 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:10.960450 kubelet[1976]: E0129 12:57:10.960302 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.960450 kubelet[1976]: W0129 12:57:10.960311 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.960450 kubelet[1976]: E0129 12:57:10.960320 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.963040 kubelet[1976]: E0129 12:57:10.960994 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.963040 kubelet[1976]: W0129 12:57:10.961011 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.963040 kubelet[1976]: E0129 12:57:10.961021 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.966908 kubelet[1976]: E0129 12:57:10.959923 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.966908 kubelet[1976]: E0129 12:57:10.959911 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:10.966908 kubelet[1976]: E0129 12:57:10.966664 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:10.966908 kubelet[1976]: W0129 12:57:10.966678 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:10.966908 kubelet[1976]: E0129 12:57:10.966695 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:11.075245 containerd[1581]: time="2025-01-29T12:57:11.074949864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8kw4f,Uid:b90fe22e-be05-49ce-8da6-f15a509540a3,Namespace:calico-system,Attempt:0,}" Jan 29 12:57:11.079981 containerd[1581]: time="2025-01-29T12:57:11.079903565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjsdw,Uid:9739d5b7-b969-4dee-bd91-286c6a3c532d,Namespace:kube-system,Attempt:0,}" Jan 29 12:57:11.750949 kubelet[1976]: E0129 12:57:11.750843 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:11.823446 containerd[1581]: time="2025-01-29T12:57:11.823314352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:57:11.827057 containerd[1581]: time="2025-01-29T12:57:11.826979716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:57:11.828922 containerd[1581]: time="2025-01-29T12:57:11.828841097Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 12:57:11.832087 containerd[1581]: time="2025-01-29T12:57:11.831514060Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:57:11.832087 containerd[1581]: time="2025-01-29T12:57:11.832017855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:57:11.837646 containerd[1581]: time="2025-01-29T12:57:11.837550071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:57:11.840972 containerd[1581]: time="2025-01-29T12:57:11.840477431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 765.379088ms" Jan 29 12:57:11.847209 containerd[1581]: time="2025-01-29T12:57:11.847087048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 766.992845ms" Jan 29 12:57:11.919881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332445092.mount: Deactivated successfully. Jan 29 12:57:12.044004 containerd[1581]: time="2025-01-29T12:57:12.042808774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:57:12.044004 containerd[1581]: time="2025-01-29T12:57:12.043638610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:57:12.044004 containerd[1581]: time="2025-01-29T12:57:12.043693874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:57:12.044004 containerd[1581]: time="2025-01-29T12:57:12.043708982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:57:12.044004 containerd[1581]: time="2025-01-29T12:57:12.043823867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:57:12.044004 containerd[1581]: time="2025-01-29T12:57:12.043925448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:57:12.044004 containerd[1581]: time="2025-01-29T12:57:12.043964952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:57:12.045931 containerd[1581]: time="2025-01-29T12:57:12.044140090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:57:12.173333 containerd[1581]: time="2025-01-29T12:57:12.173275584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjsdw,Uid:9739d5b7-b969-4dee-bd91-286c6a3c532d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc9ff4eecfcaadaa96f8d6b09ec7ebec66a1c5d59e5d02cd9a7d7b6b4266f9d8\"" Jan 29 12:57:12.178605 containerd[1581]: time="2025-01-29T12:57:12.178263329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:57:12.182153 containerd[1581]: time="2025-01-29T12:57:12.182112418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8kw4f,Uid:b90fe22e-be05-49ce-8da6-f15a509540a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"13231fada097003812e40bf47d084747e5243bf4301bceb5fed0e65526b9b3e6\"" Jan 29 12:57:12.751495 kubelet[1976]: E0129 12:57:12.751340 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:12.889210 kubelet[1976]: E0129 12:57:12.889042 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wqlkd" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" Jan 29 12:57:13.602922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552125601.mount: Deactivated successfully. 
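The PullImage / ImageCreate / "Pulled image ... with image id ..." entries around this point are emitted by containerd's CRI image service in the k8s.io namespace. As a rough illustration only (a direct containerd client call, not the kubelet's CRI path; the socket path is the conventional default and an assumption, not read from this host), a minimal Go sketch of the same kind of pull:

// pullsketch.go - minimal sketch: pull the kube-proxy image named in the log
// directly through the containerd Go client, in the k8s.io namespace the
// entries above come from. The socket path is assumed.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// WithPullUnpack also unpacks the layers into a snapshot, roughly the
	// point at which the "Pulled image ... with image id ..." events appear.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.30.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), img.Target().Digest)
}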
Jan 29 12:57:13.752201 kubelet[1976]: E0129 12:57:13.752149 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:14.090385 containerd[1581]: time="2025-01-29T12:57:14.089615138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:14.091857 containerd[1581]: time="2025-01-29T12:57:14.091785168Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 12:57:14.093248 containerd[1581]: time="2025-01-29T12:57:14.093197517Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:14.095967 containerd[1581]: time="2025-01-29T12:57:14.095924522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:14.099202 containerd[1581]: time="2025-01-29T12:57:14.099168536Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.920842419s" Jan 29 12:57:14.099307 containerd[1581]: time="2025-01-29T12:57:14.099290525Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:57:14.101635 containerd[1581]: time="2025-01-29T12:57:14.101397987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 12:57:14.104135 containerd[1581]: time="2025-01-29T12:57:14.104085848Z" level=info msg="CreateContainer within sandbox \"dc9ff4eecfcaadaa96f8d6b09ec7ebec66a1c5d59e5d02cd9a7d7b6b4266f9d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:57:14.130009 containerd[1581]: time="2025-01-29T12:57:14.129965005Z" level=info msg="CreateContainer within sandbox \"dc9ff4eecfcaadaa96f8d6b09ec7ebec66a1c5d59e5d02cd9a7d7b6b4266f9d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09d7390d76734be784b593a4ffb61851ad9f7358b4758e3c9f4acc14d22f962c\"" Jan 29 12:57:14.131856 containerd[1581]: time="2025-01-29T12:57:14.130765446Z" level=info msg="StartContainer for \"09d7390d76734be784b593a4ffb61851ad9f7358b4758e3c9f4acc14d22f962c\"" Jan 29 12:57:14.205142 containerd[1581]: time="2025-01-29T12:57:14.205084511Z" level=info msg="StartContainer for \"09d7390d76734be784b593a4ffb61851ad9f7358b4758e3c9f4acc14d22f962c\" returns successfully" Jan 29 12:57:14.753006 kubelet[1976]: E0129 12:57:14.752918 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:14.888121 kubelet[1976]: E0129 12:57:14.888008 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wqlkd" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" Jan 29 12:57:15.104991 kubelet[1976]: 
E0129 12:57:15.104044 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.104991 kubelet[1976]: W0129 12:57:15.104129 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.104991 kubelet[1976]: E0129 12:57:15.104199 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.104991 kubelet[1976]: I0129 12:57:15.104530 1976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjsdw" podStartSLOduration=5.18027991 podStartE2EDuration="7.104498983s" podCreationTimestamp="2025-01-29 12:57:08 +0000 UTC" firstStartedPulling="2025-01-29 12:57:12.176852382 +0000 UTC m=+4.434885788" lastFinishedPulling="2025-01-29 12:57:14.101071455 +0000 UTC m=+6.359104861" observedRunningTime="2025-01-29 12:57:15.103583195 +0000 UTC m=+7.361616641" watchObservedRunningTime="2025-01-29 12:57:15.104498983 +0000 UTC m=+7.362532479" Jan 29 12:57:15.104991 kubelet[1976]: E0129 12:57:15.104709 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.104991 kubelet[1976]: W0129 12:57:15.104737 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.104991 kubelet[1976]: E0129 12:57:15.104760 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.105700 kubelet[1976]: E0129 12:57:15.105287 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.105700 kubelet[1976]: W0129 12:57:15.105364 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.105700 kubelet[1976]: E0129 12:57:15.105387 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.107757 kubelet[1976]: E0129 12:57:15.105891 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.107757 kubelet[1976]: W0129 12:57:15.105953 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.107757 kubelet[1976]: E0129 12:57:15.105977 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:15.107757 kubelet[1976]: E0129 12:57:15.106558 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.107757 kubelet[1976]: W0129 12:57:15.106577 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.107757 kubelet[1976]: E0129 12:57:15.106599 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.107757 kubelet[1976]: E0129 12:57:15.107072 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.107757 kubelet[1976]: W0129 12:57:15.107180 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.107757 kubelet[1976]: E0129 12:57:15.107207 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.108363 kubelet[1976]: E0129 12:57:15.107886 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.108363 kubelet[1976]: W0129 12:57:15.107910 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.108363 kubelet[1976]: E0129 12:57:15.107935 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.108561 kubelet[1976]: E0129 12:57:15.108356 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.108561 kubelet[1976]: W0129 12:57:15.108430 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.108561 kubelet[1976]: E0129 12:57:15.108460 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.109186 kubelet[1976]: E0129 12:57:15.109132 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.109186 kubelet[1976]: W0129 12:57:15.109164 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.109507 kubelet[1976]: E0129 12:57:15.109248 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:15.109782 kubelet[1976]: E0129 12:57:15.109747 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.109782 kubelet[1976]: W0129 12:57:15.109776 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.110089 kubelet[1976]: E0129 12:57:15.109849 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.110502 kubelet[1976]: E0129 12:57:15.110321 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.110502 kubelet[1976]: W0129 12:57:15.110355 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.110502 kubelet[1976]: E0129 12:57:15.110380 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.110899 kubelet[1976]: E0129 12:57:15.110838 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.110899 kubelet[1976]: W0129 12:57:15.110895 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.111065 kubelet[1976]: E0129 12:57:15.110918 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.111292 kubelet[1976]: E0129 12:57:15.111261 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.111292 kubelet[1976]: W0129 12:57:15.111289 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.111442 kubelet[1976]: E0129 12:57:15.111310 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.111645 kubelet[1976]: E0129 12:57:15.111615 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.111645 kubelet[1976]: W0129 12:57:15.111642 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.111780 kubelet[1976]: E0129 12:57:15.111663 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:15.112152 kubelet[1976]: E0129 12:57:15.112118 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.112152 kubelet[1976]: W0129 12:57:15.112142 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.112355 kubelet[1976]: E0129 12:57:15.112164 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.112521 kubelet[1976]: E0129 12:57:15.112484 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.112521 kubelet[1976]: W0129 12:57:15.112511 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.112659 kubelet[1976]: E0129 12:57:15.112532 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.112923 kubelet[1976]: E0129 12:57:15.112884 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.112923 kubelet[1976]: W0129 12:57:15.112913 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.113086 kubelet[1976]: E0129 12:57:15.112934 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.113288 kubelet[1976]: E0129 12:57:15.113251 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.113288 kubelet[1976]: W0129 12:57:15.113279 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.113464 kubelet[1976]: E0129 12:57:15.113301 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.113677 kubelet[1976]: E0129 12:57:15.113640 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.113677 kubelet[1976]: W0129 12:57:15.113667 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.113872 kubelet[1976]: E0129 12:57:15.113693 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:15.114116 kubelet[1976]: E0129 12:57:15.114078 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.114116 kubelet[1976]: W0129 12:57:15.114106 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.114242 kubelet[1976]: E0129 12:57:15.114133 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.122171 kubelet[1976]: E0129 12:57:15.121982 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.122171 kubelet[1976]: W0129 12:57:15.122027 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.122171 kubelet[1976]: E0129 12:57:15.122063 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.122954 kubelet[1976]: E0129 12:57:15.122528 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.122954 kubelet[1976]: W0129 12:57:15.122549 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.122954 kubelet[1976]: E0129 12:57:15.122585 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.123147 kubelet[1976]: E0129 12:57:15.122996 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.123147 kubelet[1976]: W0129 12:57:15.123023 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.123147 kubelet[1976]: E0129 12:57:15.123064 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.123462 kubelet[1976]: E0129 12:57:15.123385 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.123462 kubelet[1976]: W0129 12:57:15.123421 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.123462 kubelet[1976]: E0129 12:57:15.123442 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:15.123770 kubelet[1976]: E0129 12:57:15.123739 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.123770 kubelet[1976]: W0129 12:57:15.123766 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.123980 kubelet[1976]: E0129 12:57:15.123842 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.124356 kubelet[1976]: E0129 12:57:15.124284 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.124356 kubelet[1976]: W0129 12:57:15.124320 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.124987 kubelet[1976]: E0129 12:57:15.124518 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.124987 kubelet[1976]: E0129 12:57:15.124636 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.124987 kubelet[1976]: W0129 12:57:15.124656 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.124987 kubelet[1976]: E0129 12:57:15.124678 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.125271 kubelet[1976]: E0129 12:57:15.125046 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.125271 kubelet[1976]: W0129 12:57:15.125066 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.125271 kubelet[1976]: E0129 12:57:15.125125 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.125564 kubelet[1976]: E0129 12:57:15.125505 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.125564 kubelet[1976]: W0129 12:57:15.125539 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.125729 kubelet[1976]: E0129 12:57:15.125569 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:57:15.126081 kubelet[1976]: E0129 12:57:15.126024 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.126081 kubelet[1976]: W0129 12:57:15.126063 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.126245 kubelet[1976]: E0129 12:57:15.126097 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.126911 kubelet[1976]: E0129 12:57:15.126666 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.126911 kubelet[1976]: W0129 12:57:15.126695 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.126911 kubelet[1976]: E0129 12:57:15.126736 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.127249 kubelet[1976]: E0129 12:57:15.127140 1976 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:57:15.127249 kubelet[1976]: W0129 12:57:15.127161 1976 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:57:15.127249 kubelet[1976]: E0129 12:57:15.127191 1976 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:57:15.685788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603815483.mount: Deactivated successfully. 
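The repeated driver-call entries above describe one probe shape: the kubelet's FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with "init", the binary does not exist yet ("executable file not found in $PATH"), the empty output then fails JSON parsing ("unexpected end of JSON input"), and the nodeagent~uds plugin directory is skipped. The binary is presumably installed shortly afterwards by the flexvol-driver container pulled from ghcr.io/flatcar/calico/pod2daemon-flexvol just below. A minimal sketch of that probe shape, illustrative only and not kubelet's driver-call.go (the DriverStatus fields are assumptions):

// flexprobe.go - sketch of a FlexVolume "init" probe like the one behind the
// driver-call entries above. Not kubelet source; driverStatus fields assumed.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeInit(driver string) (*driverStatus, error) {
	// err is non-nil and out is empty when the driver binary is missing.
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("driver call failed: %v, output: %q", err, out)
	}
	var st driverStatus
	// Unmarshalling empty output is what produces "unexpected end of JSON input".
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output: %v", err)
	}
	return &st, nil
}

func main() {
	_, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}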
Jan 29 12:57:15.753301 kubelet[1976]: E0129 12:57:15.753194 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:15.824258 containerd[1581]: time="2025-01-29T12:57:15.823371037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:15.824746 containerd[1581]: time="2025-01-29T12:57:15.824711191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 12:57:15.826068 containerd[1581]: time="2025-01-29T12:57:15.826032198Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:15.829623 containerd[1581]: time="2025-01-29T12:57:15.828854541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:15.829623 containerd[1581]: time="2025-01-29T12:57:15.829492598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.727391633s" Jan 29 12:57:15.829623 containerd[1581]: time="2025-01-29T12:57:15.829528305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 12:57:15.832306 containerd[1581]: time="2025-01-29T12:57:15.832249539Z" level=info msg="CreateContainer within sandbox \"13231fada097003812e40bf47d084747e5243bf4301bceb5fed0e65526b9b3e6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:57:15.854024 containerd[1581]: time="2025-01-29T12:57:15.853877211Z" level=info msg="CreateContainer within sandbox \"13231fada097003812e40bf47d084747e5243bf4301bceb5fed0e65526b9b3e6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"00e69a90bb376b3755e918d4d209559ea8cf7dc8cbbec4114e19c07bdb163edc\"" Jan 29 12:57:15.854840 containerd[1581]: time="2025-01-29T12:57:15.854601861Z" level=info msg="StartContainer for \"00e69a90bb376b3755e918d4d209559ea8cf7dc8cbbec4114e19c07bdb163edc\"" Jan 29 12:57:15.943533 containerd[1581]: time="2025-01-29T12:57:15.942906155Z" level=info msg="StartContainer for \"00e69a90bb376b3755e918d4d209559ea8cf7dc8cbbec4114e19c07bdb163edc\" returns successfully" Jan 29 12:57:16.621003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00e69a90bb376b3755e918d4d209559ea8cf7dc8cbbec4114e19c07bdb163edc-rootfs.mount: Deactivated successfully. 
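The pod_startup_latency_tracker entry at 12:57:15.104 for kube-system/kube-proxy-qjsdw reports podStartE2EDuration=7.104498983s and podStartSLOduration=5.18027991s. Those numbers are consistent with the SLO figure being the end-to-end startup time minus the image-pull window (lastFinishedPulling - firstStartedPulling, about 1.924s); that relationship is inferred from the reported values, not stated in the log. A small sketch rechecking the arithmetic from the timestamps in the entry:

// startup_latency.go - recompute the kube-proxy-qjsdw startup durations from
// the timestamps reported by pod_startup_latency_tracker above. The relation
// SLO = E2E minus the image-pull window is inferred from those numbers.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Same textual form the kubelet entry uses; Go accepts the fractional
	// seconds on parse even though the layout omits them.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-29 12:57:08 +0000 UTC")
	firstPull := mustParse("2025-01-29 12:57:12.176852382 +0000 UTC")
	lastPull := mustParse("2025-01-29 12:57:14.101071455 +0000 UTC")
	running := mustParse("2025-01-29 12:57:15.104498983 +0000 UTC")

	e2e := running.Sub(created)          // 7.104498983s -> podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 5.18027991s  -> podStartSLOduration
	fmt.Println(e2e, slo)
}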
Jan 29 12:57:16.753640 kubelet[1976]: E0129 12:57:16.753486 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:16.890325 kubelet[1976]: E0129 12:57:16.889351 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wqlkd" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" Jan 29 12:57:17.054559 containerd[1581]: time="2025-01-29T12:57:17.054445216Z" level=info msg="shim disconnected" id=00e69a90bb376b3755e918d4d209559ea8cf7dc8cbbec4114e19c07bdb163edc namespace=k8s.io Jan 29 12:57:17.055432 containerd[1581]: time="2025-01-29T12:57:17.055078063Z" level=warning msg="cleaning up after shim disconnected" id=00e69a90bb376b3755e918d4d209559ea8cf7dc8cbbec4114e19c07bdb163edc namespace=k8s.io Jan 29 12:57:17.055432 containerd[1581]: time="2025-01-29T12:57:17.055177380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:57:17.084030 containerd[1581]: time="2025-01-29T12:57:17.083881143Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:57:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:57:17.106538 containerd[1581]: time="2025-01-29T12:57:17.106453518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:57:17.754191 kubelet[1976]: E0129 12:57:17.754120 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:18.754903 kubelet[1976]: E0129 12:57:18.754823 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:18.892577 kubelet[1976]: E0129 12:57:18.892441 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wqlkd" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" Jan 29 12:57:19.755780 kubelet[1976]: E0129 12:57:19.755692 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:20.756701 kubelet[1976]: E0129 12:57:20.756523 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:20.889770 kubelet[1976]: E0129 12:57:20.888970 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wqlkd" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" Jan 29 12:57:21.758893 kubelet[1976]: E0129 12:57:21.758842 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:22.759975 kubelet[1976]: E0129 12:57:22.759921 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:22.818336 containerd[1581]: time="2025-01-29T12:57:22.817858757Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:22.819967 containerd[1581]: time="2025-01-29T12:57:22.819885781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 12:57:22.821597 containerd[1581]: time="2025-01-29T12:57:22.821556849Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:22.824510 containerd[1581]: time="2025-01-29T12:57:22.824463694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:22.825546 containerd[1581]: time="2025-01-29T12:57:22.825433004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.718905838s" Jan 29 12:57:22.825546 containerd[1581]: time="2025-01-29T12:57:22.825464422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 12:57:22.828677 containerd[1581]: time="2025-01-29T12:57:22.828651610Z" level=info msg="CreateContainer within sandbox \"13231fada097003812e40bf47d084747e5243bf4301bceb5fed0e65526b9b3e6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:57:22.849576 containerd[1581]: time="2025-01-29T12:57:22.849480293Z" level=info msg="CreateContainer within sandbox \"13231fada097003812e40bf47d084747e5243bf4301bceb5fed0e65526b9b3e6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5215e16816df5eff0537829c191066d2ef4ce524d64d6f89d09b0f0300676e96\"" Jan 29 12:57:22.850127 containerd[1581]: time="2025-01-29T12:57:22.850100961Z" level=info msg="StartContainer for \"5215e16816df5eff0537829c191066d2ef4ce524d64d6f89d09b0f0300676e96\"" Jan 29 12:57:22.884654 systemd[1]: run-containerd-runc-k8s.io-5215e16816df5eff0537829c191066d2ef4ce524d64d6f89d09b0f0300676e96-runc.lf1im9.mount: Deactivated successfully. 
Jan 29 12:57:22.889883 kubelet[1976]: E0129 12:57:22.889071 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wqlkd" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" Jan 29 12:57:22.927635 containerd[1581]: time="2025-01-29T12:57:22.927579259Z" level=info msg="StartContainer for \"5215e16816df5eff0537829c191066d2ef4ce524d64d6f89d09b0f0300676e96\" returns successfully" Jan 29 12:57:23.761143 kubelet[1976]: E0129 12:57:23.761010 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:24.134052 containerd[1581]: time="2025-01-29T12:57:24.133765363Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:57:24.147916 kubelet[1976]: I0129 12:57:24.147472 1976 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:57:24.199231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5215e16816df5eff0537829c191066d2ef4ce524d64d6f89d09b0f0300676e96-rootfs.mount: Deactivated successfully. Jan 29 12:57:24.762139 kubelet[1976]: E0129 12:57:24.761975 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:25.037711 containerd[1581]: time="2025-01-29T12:57:25.036852435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wqlkd,Uid:43cc5aeb-8d72-4137-8afa-6422de953051,Namespace:calico-system,Attempt:0,}" Jan 29 12:57:25.082706 containerd[1581]: time="2025-01-29T12:57:25.082530688Z" level=info msg="shim disconnected" id=5215e16816df5eff0537829c191066d2ef4ce524d64d6f89d09b0f0300676e96 namespace=k8s.io Jan 29 12:57:25.082706 containerd[1581]: time="2025-01-29T12:57:25.082650171Z" level=warning msg="cleaning up after shim disconnected" id=5215e16816df5eff0537829c191066d2ef4ce524d64d6f89d09b0f0300676e96 namespace=k8s.io Jan 29 12:57:25.082706 containerd[1581]: time="2025-01-29T12:57:25.082674526Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:57:25.157289 containerd[1581]: time="2025-01-29T12:57:25.156942352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:57:25.200666 containerd[1581]: time="2025-01-29T12:57:25.200596525Z" level=error msg="Failed to destroy network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:57:25.201293 containerd[1581]: time="2025-01-29T12:57:25.201252771Z" level=error msg="encountered an error cleaning up failed sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:57:25.201452 containerd[1581]: time="2025-01-29T12:57:25.201391139Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-wqlkd,Uid:43cc5aeb-8d72-4137-8afa-6422de953051,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:57:25.203354 kubelet[1976]: E0129 12:57:25.202961 1976 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:57:25.203354 kubelet[1976]: E0129 12:57:25.203037 1976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wqlkd" Jan 29 12:57:25.203354 kubelet[1976]: E0129 12:57:25.203071 1976 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wqlkd" Jan 29 12:57:25.203063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706-shm.mount: Deactivated successfully. 
Jan 29 12:57:25.203819 kubelet[1976]: E0129 12:57:25.203123 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wqlkd_calico-system(43cc5aeb-8d72-4137-8afa-6422de953051)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wqlkd_calico-system(43cc5aeb-8d72-4137-8afa-6422de953051)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wqlkd" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" Jan 29 12:57:25.762840 kubelet[1976]: E0129 12:57:25.762700 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:26.156577 kubelet[1976]: I0129 12:57:26.155220 1976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:57:26.160193 containerd[1581]: time="2025-01-29T12:57:26.160106083Z" level=info msg="StopPodSandbox for \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\"" Jan 29 12:57:26.161721 containerd[1581]: time="2025-01-29T12:57:26.160739476Z" level=info msg="Ensure that sandbox 4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706 in task-service has been cleanup successfully" Jan 29 12:57:26.222849 containerd[1581]: time="2025-01-29T12:57:26.222624953Z" level=error msg="StopPodSandbox for \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\" failed" error="failed to destroy network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:57:26.223482 kubelet[1976]: E0129 12:57:26.223391 1976 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:57:26.223647 kubelet[1976]: E0129 12:57:26.223523 1976 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706"} Jan 29 12:57:26.223721 kubelet[1976]: E0129 12:57:26.223649 1976 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43cc5aeb-8d72-4137-8afa-6422de953051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:57:26.223942 kubelet[1976]: E0129 12:57:26.223716 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"43cc5aeb-8d72-4137-8afa-6422de953051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wqlkd" podUID="43cc5aeb-8d72-4137-8afa-6422de953051" Jan 29 12:57:26.763698 kubelet[1976]: E0129 12:57:26.763584 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:27.764436 kubelet[1976]: E0129 12:57:27.764332 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:28.097923 kubelet[1976]: I0129 12:57:28.097014 1976 topology_manager.go:215] "Topology Admit Handler" podUID="8f708e0b-c2a1-437f-bd4d-207b0ef20694" podNamespace="default" podName="nginx-deployment-85f456d6dd-62vp4" Jan 29 12:57:28.215218 kubelet[1976]: I0129 12:57:28.214953 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jj8\" (UniqueName: \"kubernetes.io/projected/8f708e0b-c2a1-437f-bd4d-207b0ef20694-kube-api-access-m4jj8\") pod \"nginx-deployment-85f456d6dd-62vp4\" (UID: \"8f708e0b-c2a1-437f-bd4d-207b0ef20694\") " pod="default/nginx-deployment-85f456d6dd-62vp4" Jan 29 12:57:28.411421 containerd[1581]: time="2025-01-29T12:57:28.410564094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-62vp4,Uid:8f708e0b-c2a1-437f-bd4d-207b0ef20694,Namespace:default,Attempt:0,}" Jan 29 12:57:28.558749 containerd[1581]: time="2025-01-29T12:57:28.558695268Z" level=error msg="Failed to destroy network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:57:28.559248 containerd[1581]: time="2025-01-29T12:57:28.559222183Z" level=error msg="encountered an error cleaning up failed sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:57:28.559359 containerd[1581]: time="2025-01-29T12:57:28.559334082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-62vp4,Uid:8f708e0b-c2a1-437f-bd4d-207b0ef20694,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:57:28.561686 kubelet[1976]: E0129 12:57:28.559944 1976 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 12:57:28.561924 kubelet[1976]: E0129 12:57:28.561900 1976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-62vp4" Jan 29 12:57:28.561959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030-shm.mount: Deactivated successfully. Jan 29 12:57:28.562318 kubelet[1976]: E0129 12:57:28.562294 1976 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-62vp4" Jan 29 12:57:28.563109 kubelet[1976]: E0129 12:57:28.563066 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-62vp4_default(8f708e0b-c2a1-437f-bd4d-207b0ef20694)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-62vp4_default(8f708e0b-c2a1-437f-bd4d-207b0ef20694)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-62vp4" podUID="8f708e0b-c2a1-437f-bd4d-207b0ef20694" Jan 29 12:57:28.747003 kubelet[1976]: E0129 12:57:28.746926 1976 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:28.765345 kubelet[1976]: E0129 12:57:28.765311 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:29.167121 kubelet[1976]: I0129 12:57:29.165709 1976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:57:29.168867 containerd[1581]: time="2025-01-29T12:57:29.168604718Z" level=info msg="StopPodSandbox for \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\"" Jan 29 12:57:29.170985 containerd[1581]: time="2025-01-29T12:57:29.169895100Z" level=info msg="Ensure that sandbox eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030 in task-service has been cleanup successfully" Jan 29 12:57:29.261650 containerd[1581]: time="2025-01-29T12:57:29.261585589Z" level=error msg="StopPodSandbox for \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\" failed" error="failed to destroy network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:57:29.262202 
kubelet[1976]: E0129 12:57:29.262023 1976 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:57:29.262202 kubelet[1976]: E0129 12:57:29.262081 1976 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030"} Jan 29 12:57:29.262202 kubelet[1976]: E0129 12:57:29.262135 1976 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f708e0b-c2a1-437f-bd4d-207b0ef20694\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:57:29.262202 kubelet[1976]: E0129 12:57:29.262163 1976 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f708e0b-c2a1-437f-bd4d-207b0ef20694\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-62vp4" podUID="8f708e0b-c2a1-437f-bd4d-207b0ef20694" Jan 29 12:57:29.766145 kubelet[1976]: E0129 12:57:29.766046 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:30.767233 kubelet[1976]: E0129 12:57:30.767125 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:31.353697 update_engine[1566]: I20250129 12:57:31.353600 1566 update_attempter.cc:509] Updating boot flags... Jan 29 12:57:31.425842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2573) Jan 29 12:57:31.474280 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2573) Jan 29 12:57:31.767336 kubelet[1976]: E0129 12:57:31.767271 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:32.768311 kubelet[1976]: E0129 12:57:32.768245 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:33.769177 kubelet[1976]: E0129 12:57:33.769099 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:34.087415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount542297531.mount: Deactivated successfully. 
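The kubelet line that repeats roughly once a second throughout this log, "Unable to read config path ... /etc/kubernetes/manifests", is the file-based static-pod source noticing that its configured directory is absent and ignoring it rather than failing. A rough sketch of that behaviour, assuming a simple poll loop (the real kubelet also watches the path with inotify; this is illustrative only):

    // Illustrative sketch, not kubelet source: stat the configured static-pod
    // directory, log when it is missing, and keep polling instead of failing.
    package main

    import (
    	"log"
    	"os"
    	"time"
    )

    func watchStaticPodPath(path string, interval time.Duration) {
    	for {
    		if _, err := os.Stat(path); err != nil {
    			log.Printf("Unable to read config path %q: path does not exist, ignoring", path)
    		} else {
    			log.Printf("config path %q is present", path)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	// Hypothetical interval; the log shows the message roughly once per second.
    	watchStaticPodPath("/etc/kubernetes/manifests", time.Second)
    }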
Jan 29 12:57:34.148279 containerd[1581]: time="2025-01-29T12:57:34.147456582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:34.148863 containerd[1581]: time="2025-01-29T12:57:34.148822028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:57:34.150380 containerd[1581]: time="2025-01-29T12:57:34.150353934Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:34.153884 containerd[1581]: time="2025-01-29T12:57:34.153823697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:34.154564 containerd[1581]: time="2025-01-29T12:57:34.154536381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.997485957s" Jan 29 12:57:34.154649 containerd[1581]: time="2025-01-29T12:57:34.154632781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 12:57:34.171669 containerd[1581]: time="2025-01-29T12:57:34.171632823Z" level=info msg="CreateContainer within sandbox \"13231fada097003812e40bf47d084747e5243bf4301bceb5fed0e65526b9b3e6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:57:34.194865 containerd[1581]: time="2025-01-29T12:57:34.194817958Z" level=info msg="CreateContainer within sandbox \"13231fada097003812e40bf47d084747e5243bf4301bceb5fed0e65526b9b3e6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d0b6ae4659000f7ab8c91252277a15941db9ed9c5bf5562cc60e68a2b857fd41\"" Jan 29 12:57:34.197438 containerd[1581]: time="2025-01-29T12:57:34.195729924Z" level=info msg="StartContainer for \"d0b6ae4659000f7ab8c91252277a15941db9ed9c5bf5562cc60e68a2b857fd41\"" Jan 29 12:57:34.273624 containerd[1581]: time="2025-01-29T12:57:34.273543533Z" level=info msg="StartContainer for \"d0b6ae4659000f7ab8c91252277a15941db9ed9c5bf5562cc60e68a2b857fd41\" returns successfully" Jan 29 12:57:34.375734 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:57:34.376003 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
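For scale, the calico/node pull above moved 142,741,872 bytes in 8.997485957s, i.e. roughly 15.9 MB/s (about 15.1 MiB/s). containerd does not log throughput itself; the figure follows from the two values it does print:

    // Quick arithmetic check on the pull above, using only numbers from the log.
    package main

    import "fmt"

    func main() {
    	const bytesPulled = 142741872.0 // size reported in the "Pulled image" message
    	const seconds = 8.997485957     // "in 8.997485957s" from the same message
    	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bytesPulled/seconds/1e6, bytesPulled/seconds/(1<<20))
    }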
Jan 29 12:57:34.770455 kubelet[1976]: E0129 12:57:34.770357 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:35.219869 kubelet[1976]: I0129 12:57:35.219262 1976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8kw4f" podStartSLOduration=5.247681794 podStartE2EDuration="27.219226815s" podCreationTimestamp="2025-01-29 12:57:08 +0000 UTC" firstStartedPulling="2025-01-29 12:57:12.183822365 +0000 UTC m=+4.441855771" lastFinishedPulling="2025-01-29 12:57:34.155367396 +0000 UTC m=+26.413400792" observedRunningTime="2025-01-29 12:57:35.218787482 +0000 UTC m=+27.476820958" watchObservedRunningTime="2025-01-29 12:57:35.219226815 +0000 UTC m=+27.477260271" Jan 29 12:57:35.771694 kubelet[1976]: E0129 12:57:35.771598 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:36.167828 kernel: bpftool[2766]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:57:36.473322 systemd-networkd[1206]: vxlan.calico: Link UP Jan 29 12:57:36.473332 systemd-networkd[1206]: vxlan.calico: Gained carrier Jan 29 12:57:36.773778 kubelet[1976]: E0129 12:57:36.772991 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:37.517023 kubelet[1976]: I0129 12:57:37.516909 1976 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:57:37.575083 systemd-networkd[1206]: vxlan.calico: Gained IPv6LL Jan 29 12:57:37.773623 kubelet[1976]: E0129 12:57:37.773268 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:37.889547 containerd[1581]: time="2025-01-29T12:57:37.889430076Z" level=info msg="StopPodSandbox for \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\"" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.039 [INFO][2891] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.040 [INFO][2891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" iface="eth0" netns="/var/run/netns/cni-61e1f91c-704d-a322-1571-e01a4d33e954" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.040 [INFO][2891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" iface="eth0" netns="/var/run/netns/cni-61e1f91c-704d-a322-1571-e01a4d33e954" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.041 [INFO][2891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" iface="eth0" netns="/var/run/netns/cni-61e1f91c-704d-a322-1571-e01a4d33e954" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.041 [INFO][2891] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.041 [INFO][2891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.097 [INFO][2898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" HandleID="k8s-pod-network.4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.097 [INFO][2898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.097 [INFO][2898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.113 [WARNING][2898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" HandleID="k8s-pod-network.4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.113 [INFO][2898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" HandleID="k8s-pod-network.4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.116 [INFO][2898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:57:38.124319 containerd[1581]: 2025-01-29 12:57:38.120 [INFO][2891] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:57:38.124319 containerd[1581]: time="2025-01-29T12:57:38.123855899Z" level=info msg="TearDown network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\" successfully" Jan 29 12:57:38.124319 containerd[1581]: time="2025-01-29T12:57:38.123906143Z" level=info msg="StopPodSandbox for \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\" returns successfully" Jan 29 12:57:38.130081 containerd[1581]: time="2025-01-29T12:57:38.127359678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wqlkd,Uid:43cc5aeb-8d72-4137-8afa-6422de953051,Namespace:calico-system,Attempt:1,}" Jan 29 12:57:38.134155 systemd[1]: run-netns-cni\x2d61e1f91c\x2d704d\x2da322\x2d1571\x2de01a4d33e954.mount: Deactivated successfully. 
Jan 29 12:57:38.774468 kubelet[1976]: E0129 12:57:38.774385 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:38.955106 systemd-networkd[1206]: cali57a7cde3725: Link UP Jan 29 12:57:38.957204 systemd-networkd[1206]: cali57a7cde3725: Gained carrier Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.756 [INFO][2906] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.160-k8s-csi--node--driver--wqlkd-eth0 csi-node-driver- calico-system 43cc5aeb-8d72-4137-8afa-6422de953051 1142 0 2025-01-29 12:57:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.160 csi-node-driver-wqlkd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali57a7cde3725 [] []}} ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Namespace="calico-system" Pod="csi-node-driver-wqlkd" WorkloadEndpoint="172.24.4.160-k8s-csi--node--driver--wqlkd-" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.756 [INFO][2906] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Namespace="calico-system" Pod="csi-node-driver-wqlkd" WorkloadEndpoint="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.832 [INFO][2917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" HandleID="k8s-pod-network.de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.870 [INFO][2917] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" HandleID="k8s-pod-network.de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336f60), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.160", "pod":"csi-node-driver-wqlkd", "timestamp":"2025-01-29 12:57:38.832275458 +0000 UTC"}, Hostname:"172.24.4.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.870 [INFO][2917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.870 [INFO][2917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.870 [INFO][2917] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.160' Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.874 [INFO][2917] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" host="172.24.4.160" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.886 [INFO][2917] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.160" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.897 [INFO][2917] ipam/ipam.go 489: Trying affinity for 192.168.77.192/26 host="172.24.4.160" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.902 [INFO][2917] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.192/26 host="172.24.4.160" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.907 [INFO][2917] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="172.24.4.160" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.907 [INFO][2917] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" host="172.24.4.160" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.910 [INFO][2917] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.920 [INFO][2917] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" host="172.24.4.160" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.944 [INFO][2917] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.77.193/26] block=192.168.77.192/26 handle="k8s-pod-network.de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" host="172.24.4.160" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.945 [INFO][2917] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.193/26] handle="k8s-pod-network.de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" host="172.24.4.160" Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.945 [INFO][2917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:57:38.999647 containerd[1581]: 2025-01-29 12:57:38.945 [INFO][2917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.193/26] IPv6=[] ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" HandleID="k8s-pod-network.de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:39.002425 containerd[1581]: 2025-01-29 12:57:38.948 [INFO][2906] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Namespace="calico-system" Pod="csi-node-driver-wqlkd" WorkloadEndpoint="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-csi--node--driver--wqlkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43cc5aeb-8d72-4137-8afa-6422de953051", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"", Pod:"csi-node-driver-wqlkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57a7cde3725", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:57:39.002425 containerd[1581]: 2025-01-29 12:57:38.949 [INFO][2906] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.193/32] ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Namespace="calico-system" Pod="csi-node-driver-wqlkd" WorkloadEndpoint="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:39.002425 containerd[1581]: 2025-01-29 12:57:38.949 [INFO][2906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57a7cde3725 ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Namespace="calico-system" Pod="csi-node-driver-wqlkd" WorkloadEndpoint="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:39.002425 containerd[1581]: 2025-01-29 12:57:38.954 [INFO][2906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Namespace="calico-system" Pod="csi-node-driver-wqlkd" WorkloadEndpoint="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:39.002425 containerd[1581]: 2025-01-29 12:57:38.955 [INFO][2906] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Namespace="calico-system" Pod="csi-node-driver-wqlkd" 
WorkloadEndpoint="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-csi--node--driver--wqlkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43cc5aeb-8d72-4137-8afa-6422de953051", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de", Pod:"csi-node-driver-wqlkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57a7cde3725", MAC:"42:98:a0:05:0a:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:57:39.002425 containerd[1581]: 2025-01-29 12:57:38.996 [INFO][2906] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de" Namespace="calico-system" Pod="csi-node-driver-wqlkd" WorkloadEndpoint="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:57:39.510684 containerd[1581]: time="2025-01-29T12:57:39.510009291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:57:39.510684 containerd[1581]: time="2025-01-29T12:57:39.510123805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:57:39.510684 containerd[1581]: time="2025-01-29T12:57:39.510158590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:57:39.510684 containerd[1581]: time="2025-01-29T12:57:39.510326414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:57:39.566263 systemd[1]: run-containerd-runc-k8s.io-de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de-runc.y5YBbX.mount: Deactivated successfully. 
Jan 29 12:57:39.596855 containerd[1581]: time="2025-01-29T12:57:39.596814728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wqlkd,Uid:43cc5aeb-8d72-4137-8afa-6422de953051,Namespace:calico-system,Attempt:1,} returns sandbox id \"de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de\"" Jan 29 12:57:39.599641 containerd[1581]: time="2025-01-29T12:57:39.599387154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:57:39.775713 kubelet[1976]: E0129 12:57:39.775500 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:40.519312 systemd-networkd[1206]: cali57a7cde3725: Gained IPv6LL Jan 29 12:57:40.776957 kubelet[1976]: E0129 12:57:40.776686 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:41.622295 containerd[1581]: time="2025-01-29T12:57:41.622222900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:41.623479 containerd[1581]: time="2025-01-29T12:57:41.623413569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 12:57:41.624924 containerd[1581]: time="2025-01-29T12:57:41.624875358Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:41.627609 containerd[1581]: time="2025-01-29T12:57:41.627583639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:41.628442 containerd[1581]: time="2025-01-29T12:57:41.628291074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.028863674s" Jan 29 12:57:41.628442 containerd[1581]: time="2025-01-29T12:57:41.628335287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 12:57:41.631190 containerd[1581]: time="2025-01-29T12:57:41.631071512Z" level=info msg="CreateContainer within sandbox \"de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:57:41.656324 containerd[1581]: time="2025-01-29T12:57:41.656191847Z" level=info msg="CreateContainer within sandbox \"de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"49ebc7697432a6de7277af8372a1949b943b0ee9d0142e9e02616db3fb5c4d58\"" Jan 29 12:57:41.657297 containerd[1581]: time="2025-01-29T12:57:41.657272651Z" level=info msg="StartContainer for \"49ebc7697432a6de7277af8372a1949b943b0ee9d0142e9e02616db3fb5c4d58\"" Jan 29 12:57:41.731880 containerd[1581]: time="2025-01-29T12:57:41.731826164Z" level=info msg="StartContainer for \"49ebc7697432a6de7277af8372a1949b943b0ee9d0142e9e02616db3fb5c4d58\" returns successfully" Jan 29 12:57:41.733275 
containerd[1581]: time="2025-01-29T12:57:41.733188677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:57:41.777011 kubelet[1976]: E0129 12:57:41.776905 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:42.778104 kubelet[1976]: E0129 12:57:42.778022 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:43.778731 kubelet[1976]: E0129 12:57:43.778650 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:43.889620 containerd[1581]: time="2025-01-29T12:57:43.889556960Z" level=info msg="StopPodSandbox for \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\"" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:43.981 [INFO][3038] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:43.981 [INFO][3038] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" iface="eth0" netns="/var/run/netns/cni-740125f1-154c-1f67-47db-d10edb5195e3" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:43.982 [INFO][3038] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" iface="eth0" netns="/var/run/netns/cni-740125f1-154c-1f67-47db-d10edb5195e3" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:43.982 [INFO][3038] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" iface="eth0" netns="/var/run/netns/cni-740125f1-154c-1f67-47db-d10edb5195e3" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:43.982 [INFO][3038] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:43.982 [INFO][3038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:44.043 [INFO][3045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" HandleID="k8s-pod-network.eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:44.043 [INFO][3045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:44.043 [INFO][3045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:44.067 [WARNING][3045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" HandleID="k8s-pod-network.eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:44.067 [INFO][3045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" HandleID="k8s-pod-network.eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:44.069 [INFO][3045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:57:44.072853 containerd[1581]: 2025-01-29 12:57:44.071 [INFO][3038] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:57:44.077160 containerd[1581]: time="2025-01-29T12:57:44.075128530Z" level=info msg="TearDown network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\" successfully" Jan 29 12:57:44.077160 containerd[1581]: time="2025-01-29T12:57:44.076959440Z" level=info msg="StopPodSandbox for \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\" returns successfully" Jan 29 12:57:44.078606 systemd[1]: run-netns-cni\x2d740125f1\x2d154c\x2d1f67\x2d47db\x2dd10edb5195e3.mount: Deactivated successfully. Jan 29 12:57:44.080911 containerd[1581]: time="2025-01-29T12:57:44.080381560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-62vp4,Uid:8f708e0b-c2a1-437f-bd4d-207b0ef20694,Namespace:default,Attempt:1,}" Jan 29 12:57:44.128648 containerd[1581]: time="2025-01-29T12:57:44.126862543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:44.129708 containerd[1581]: time="2025-01-29T12:57:44.129660924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 12:57:44.133705 containerd[1581]: time="2025-01-29T12:57:44.133654735Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:44.139419 containerd[1581]: time="2025-01-29T12:57:44.139368930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:44.141990 containerd[1581]: time="2025-01-29T12:57:44.141940968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.408714682s" Jan 29 12:57:44.142080 containerd[1581]: time="2025-01-29T12:57:44.141999939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:57:44.144760 containerd[1581]: 
time="2025-01-29T12:57:44.144675180Z" level=info msg="CreateContainer within sandbox \"de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:57:44.169468 containerd[1581]: time="2025-01-29T12:57:44.169384858Z" level=info msg="CreateContainer within sandbox \"de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8133a2773afcaa353d00e2dc14f9732460bc6451a3ab912b8c43372b790b7af1\"" Jan 29 12:57:44.170366 containerd[1581]: time="2025-01-29T12:57:44.170251141Z" level=info msg="StartContainer for \"8133a2773afcaa353d00e2dc14f9732460bc6451a3ab912b8c43372b790b7af1\"" Jan 29 12:57:44.249691 containerd[1581]: time="2025-01-29T12:57:44.249514272Z" level=info msg="StartContainer for \"8133a2773afcaa353d00e2dc14f9732460bc6451a3ab912b8c43372b790b7af1\" returns successfully" Jan 29 12:57:44.275337 systemd-networkd[1206]: caliefb2941855e: Link UP Jan 29 12:57:44.275538 systemd-networkd[1206]: caliefb2941855e: Gained carrier Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.152 [INFO][3054] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0 nginx-deployment-85f456d6dd- default 8f708e0b-c2a1-437f-bd4d-207b0ef20694 1166 0 2025-01-29 12:57:28 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.160 nginx-deployment-85f456d6dd-62vp4 eth0 default [] [] [kns.default ksa.default.default] caliefb2941855e [] []}} ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Namespace="default" Pod="nginx-deployment-85f456d6dd-62vp4" WorkloadEndpoint="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.152 [INFO][3054] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Namespace="default" Pod="nginx-deployment-85f456d6dd-62vp4" WorkloadEndpoint="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.208 [INFO][3066] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" HandleID="k8s-pod-network.91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.222 [INFO][3066] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" HandleID="k8s-pod-network.91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000284610), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.160", "pod":"nginx-deployment-85f456d6dd-62vp4", "timestamp":"2025-01-29 12:57:44.208305668 +0000 UTC"}, Hostname:"172.24.4.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.223 [INFO][3066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.223 [INFO][3066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.223 [INFO][3066] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.160' Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.225 [INFO][3066] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" host="172.24.4.160" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.230 [INFO][3066] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.160" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.239 [INFO][3066] ipam/ipam.go 489: Trying affinity for 192.168.77.192/26 host="172.24.4.160" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.242 [INFO][3066] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.192/26 host="172.24.4.160" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.247 [INFO][3066] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="172.24.4.160" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.248 [INFO][3066] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" host="172.24.4.160" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.251 [INFO][3066] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8 Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.259 [INFO][3066] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" host="172.24.4.160" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.268 [INFO][3066] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.77.194/26] block=192.168.77.192/26 handle="k8s-pod-network.91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" host="172.24.4.160" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.268 [INFO][3066] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.194/26] handle="k8s-pod-network.91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" host="172.24.4.160" Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.268 [INFO][3066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:57:44.298865 containerd[1581]: 2025-01-29 12:57:44.268 [INFO][3066] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.194/26] IPv6=[] ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" HandleID="k8s-pod-network.91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.301268 containerd[1581]: 2025-01-29 12:57:44.270 [INFO][3054] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Namespace="default" Pod="nginx-deployment-85f456d6dd-62vp4" WorkloadEndpoint="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8f708e0b-c2a1-437f-bd4d-207b0ef20694", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-62vp4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliefb2941855e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:57:44.301268 containerd[1581]: 2025-01-29 12:57:44.270 [INFO][3054] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.194/32] ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Namespace="default" Pod="nginx-deployment-85f456d6dd-62vp4" WorkloadEndpoint="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.301268 containerd[1581]: 2025-01-29 12:57:44.270 [INFO][3054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefb2941855e ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Namespace="default" Pod="nginx-deployment-85f456d6dd-62vp4" WorkloadEndpoint="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.301268 containerd[1581]: 2025-01-29 12:57:44.275 [INFO][3054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Namespace="default" Pod="nginx-deployment-85f456d6dd-62vp4" WorkloadEndpoint="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.301268 containerd[1581]: 2025-01-29 12:57:44.276 [INFO][3054] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Namespace="default" Pod="nginx-deployment-85f456d6dd-62vp4" WorkloadEndpoint="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8f708e0b-c2a1-437f-bd4d-207b0ef20694", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8", Pod:"nginx-deployment-85f456d6dd-62vp4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliefb2941855e", MAC:"2a:91:65:e4:25:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:57:44.301268 containerd[1581]: 2025-01-29 12:57:44.296 [INFO][3054] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8" Namespace="default" Pod="nginx-deployment-85f456d6dd-62vp4" WorkloadEndpoint="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:57:44.329661 containerd[1581]: time="2025-01-29T12:57:44.328936382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:57:44.329661 containerd[1581]: time="2025-01-29T12:57:44.329023616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:57:44.329661 containerd[1581]: time="2025-01-29T12:57:44.329045847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:57:44.329661 containerd[1581]: time="2025-01-29T12:57:44.329255049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:57:44.389266 containerd[1581]: time="2025-01-29T12:57:44.389211234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-62vp4,Uid:8f708e0b-c2a1-437f-bd4d-207b0ef20694,Namespace:default,Attempt:1,} returns sandbox id \"91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8\"" Jan 29 12:57:44.391677 containerd[1581]: time="2025-01-29T12:57:44.391647968Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 12:57:44.779875 kubelet[1976]: E0129 12:57:44.779502 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:44.852776 kubelet[1976]: I0129 12:57:44.852714 1976 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:57:44.852980 kubelet[1976]: I0129 12:57:44.852788 1976 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:57:45.267906 kubelet[1976]: I0129 12:57:45.267689 1976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wqlkd" podStartSLOduration=32.723678618 podStartE2EDuration="37.267649666s" podCreationTimestamp="2025-01-29 12:57:08 +0000 UTC" firstStartedPulling="2025-01-29 12:57:39.598763356 +0000 UTC m=+31.856796752" lastFinishedPulling="2025-01-29 12:57:44.142734404 +0000 UTC m=+36.400767800" observedRunningTime="2025-01-29 12:57:45.267525294 +0000 UTC m=+37.525558750" watchObservedRunningTime="2025-01-29 12:57:45.267649666 +0000 UTC m=+37.525683163" Jan 29 12:57:45.703266 systemd-networkd[1206]: caliefb2941855e: Gained IPv6LL Jan 29 12:57:45.780414 kubelet[1976]: E0129 12:57:45.780271 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:46.781814 kubelet[1976]: E0129 12:57:46.781047 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:47.781472 kubelet[1976]: E0129 12:57:47.781406 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:48.286975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896498869.mount: Deactivated successfully. 
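The kubelet lines that recur roughly once a second throughout this log ("Unable to read config path ... /etc/kubernetes/manifests") come from its file-based config source: the directory configured as the static-pod path does not exist on this node, so each pass simply notes that and ignores the source. A rough sketch of the equivalent check, assuming only what the message itself says (this is not the kubelet's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// checkStaticPodPath mirrors the kind of check behind the recurring
// "Unable to read config path ... path does not exist, ignoring" message:
// if the configured static-pod directory is missing there is nothing to
// read, so the source is skipped rather than treated as an error.
func checkStaticPodPath(path string) {
	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("Unable to read config path %q: path does not exist, ignoring\n", path)
		return
	}
	fmt.Printf("static pod manifests would be loaded from %q\n", path)
}

func main() {
	checkStaticPodPath("/etc/kubernetes/manifests")
}
```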
Jan 29 12:57:48.746464 kubelet[1976]: E0129 12:57:48.746302 1976 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:48.782338 kubelet[1976]: E0129 12:57:48.782223 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:49.782785 kubelet[1976]: E0129 12:57:49.782707 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:50.686164 containerd[1581]: time="2025-01-29T12:57:50.686010017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:50.688667 containerd[1581]: time="2025-01-29T12:57:50.687988194Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 12:57:50.690384 containerd[1581]: time="2025-01-29T12:57:50.690232589Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:50.699843 containerd[1581]: time="2025-01-29T12:57:50.697381566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:57:50.700510 containerd[1581]: time="2025-01-29T12:57:50.700431773Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 6.308723723s" Jan 29 12:57:50.700761 containerd[1581]: time="2025-01-29T12:57:50.700711186Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 12:57:50.706848 containerd[1581]: time="2025-01-29T12:57:50.706746917Z" level=info msg="CreateContainer within sandbox \"91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 12:57:50.737154 containerd[1581]: time="2025-01-29T12:57:50.736932375Z" level=info msg="CreateContainer within sandbox \"91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"503c7faa66b5d7b38d6334f7ff698daf3215392f1511f2609e4e3c53e9886eea\"" Jan 29 12:57:50.738663 containerd[1581]: time="2025-01-29T12:57:50.738221942Z" level=info msg="StartContainer for \"503c7faa66b5d7b38d6334f7ff698daf3215392f1511f2609e4e3c53e9886eea\"" Jan 29 12:57:50.783071 kubelet[1976]: E0129 12:57:50.782945 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:50.849675 containerd[1581]: time="2025-01-29T12:57:50.849551742Z" level=info msg="StartContainer for \"503c7faa66b5d7b38d6334f7ff698daf3215392f1511f2609e4e3c53e9886eea\" returns successfully" Jan 29 12:57:51.783952 kubelet[1976]: E0129 12:57:51.783855 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:52.784506 kubelet[1976]: E0129 12:57:52.784446 1976 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:53.784982 kubelet[1976]: E0129 12:57:53.784890 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:54.785662 kubelet[1976]: E0129 12:57:54.785548 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:55.786343 kubelet[1976]: E0129 12:57:55.786168 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:56.786555 kubelet[1976]: E0129 12:57:56.786463 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:57.786771 kubelet[1976]: E0129 12:57:57.786663 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:58.787618 kubelet[1976]: E0129 12:57:58.787534 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:57:59.788321 kubelet[1976]: E0129 12:57:59.788200 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:00.788973 kubelet[1976]: E0129 12:58:00.788852 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:01.789268 kubelet[1976]: E0129 12:58:01.789109 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:02.790457 kubelet[1976]: E0129 12:58:02.790368 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:03.791677 kubelet[1976]: E0129 12:58:03.791547 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:04.791948 kubelet[1976]: E0129 12:58:04.791790 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:05.398712 kubelet[1976]: I0129 12:58:05.398523 1976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-62vp4" podStartSLOduration=31.086981644 podStartE2EDuration="37.398487729s" podCreationTimestamp="2025-01-29 12:57:28 +0000 UTC" firstStartedPulling="2025-01-29 12:57:44.391149194 +0000 UTC m=+36.649182590" lastFinishedPulling="2025-01-29 12:57:50.702655229 +0000 UTC m=+42.960688675" observedRunningTime="2025-01-29 12:57:51.377212568 +0000 UTC m=+43.635246014" watchObservedRunningTime="2025-01-29 12:58:05.398487729 +0000 UTC m=+57.656521175" Jan 29 12:58:05.399915 kubelet[1976]: I0129 12:58:05.399140 1976 topology_manager.go:215] "Topology Admit Handler" podUID="b0e08591-eca9-4b60-8f40-e988e5d5258b" podNamespace="default" podName="nfs-server-provisioner-0" Jan 29 12:58:05.468070 kubelet[1976]: I0129 12:58:05.467967 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b0e08591-eca9-4b60-8f40-e988e5d5258b-data\") pod \"nfs-server-provisioner-0\" (UID: \"b0e08591-eca9-4b60-8f40-e988e5d5258b\") " pod="default/nfs-server-provisioner-0" Jan 29 12:58:05.468070 kubelet[1976]: I0129 12:58:05.468066 1976 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfk59\" (UniqueName: \"kubernetes.io/projected/b0e08591-eca9-4b60-8f40-e988e5d5258b-kube-api-access-tfk59\") pod \"nfs-server-provisioner-0\" (UID: \"b0e08591-eca9-4b60-8f40-e988e5d5258b\") " pod="default/nfs-server-provisioner-0" Jan 29 12:58:05.707292 containerd[1581]: time="2025-01-29T12:58:05.706273796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b0e08591-eca9-4b60-8f40-e988e5d5258b,Namespace:default,Attempt:0,}" Jan 29 12:58:05.792326 kubelet[1976]: E0129 12:58:05.792236 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:05.971042 systemd-networkd[1206]: cali60e51b789ff: Link UP Jan 29 12:58:05.974951 systemd-networkd[1206]: cali60e51b789ff: Gained carrier Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.824 [INFO][3271] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.160-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b0e08591-eca9-4b60-8f40-e988e5d5258b 1242 0 2025-01-29 12:58:05 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.160 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.160-k8s-nfs--server--provisioner--0-" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.824 [INFO][3271] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.893 [INFO][3282] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" HandleID="k8s-pod-network.83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Workload="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.904 [INFO][3282] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" HandleID="k8s-pod-network.83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Workload="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c54f0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.160", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 
12:58:05.893244932 +0000 UTC"}, Hostname:"172.24.4.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.904 [INFO][3282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.904 [INFO][3282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.904 [INFO][3282] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.160' Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.907 [INFO][3282] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" host="172.24.4.160" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.912 [INFO][3282] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.160" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.919 [INFO][3282] ipam/ipam.go 489: Trying affinity for 192.168.77.192/26 host="172.24.4.160" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.922 [INFO][3282] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.192/26 host="172.24.4.160" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.926 [INFO][3282] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="172.24.4.160" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.926 [INFO][3282] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" host="172.24.4.160" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.929 [INFO][3282] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892 Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.935 [INFO][3282] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" host="172.24.4.160" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.948 [INFO][3282] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.77.195/26] block=192.168.77.192/26 handle="k8s-pod-network.83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" host="172.24.4.160" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.948 [INFO][3282] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.195/26] handle="k8s-pod-network.83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" host="172.24.4.160" Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.948 [INFO][3282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
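The IPAM walk above claims 192.168.77.195 for nfs-server-provisioner-0 out of the block 192.168.77.192/26 that is affine to host 172.24.4.160, the same block that earlier yielded 192.168.77.194 for the nginx pod. A minimal standalone sketch (not taken from the log or from Calico) that checks the containment with Go's net package:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block that Calico IPAM reports as affine to host 172.24.4.160.
	_, block, err := net.ParseCIDR("192.168.77.192/26")
	if err != nil {
		panic(err)
	}
	// Addresses assigned to the nginx and nfs-server-provisioner pods in this log.
	for _, addr := range []string{"192.168.77.194", "192.168.77.195"} {
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(net.ParseIP(addr)))
	}
}
```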
Jan 29 12:58:06.004517 containerd[1581]: 2025-01-29 12:58:05.949 [INFO][3282] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.195/26] IPv6=[] ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" HandleID="k8s-pod-network.83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Workload="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:58:06.011228 containerd[1581]: 2025-01-29 12:58:05.952 [INFO][3271] cni-plugin/k8s.go 386: Populated endpoint ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b0e08591-eca9-4b60-8f40-e988e5d5258b", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:58:06.011228 containerd[1581]: 2025-01-29 12:58:05.952 [INFO][3271] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.195/32] ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:58:06.011228 containerd[1581]: 2025-01-29 12:58:05.952 [INFO][3271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:58:06.011228 containerd[1581]: 2025-01-29 12:58:05.976 [INFO][3271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:58:06.011691 containerd[1581]: 2025-01-29 12:58:05.977 [INFO][3271] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b0e08591-eca9-4b60-8f40-e988e5d5258b", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"32:cd:b5:7b:3b:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:58:06.011691 containerd[1581]: 2025-01-29 12:58:05.997 [INFO][3271] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.160-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:58:06.050425 containerd[1581]: time="2025-01-29T12:58:06.049743818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:58:06.050425 containerd[1581]: time="2025-01-29T12:58:06.050378288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:58:06.050761 containerd[1581]: time="2025-01-29T12:58:06.050618267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:58:06.050894 containerd[1581]: time="2025-01-29T12:58:06.050849250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:58:06.125943 containerd[1581]: time="2025-01-29T12:58:06.125879630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b0e08591-eca9-4b60-8f40-e988e5d5258b,Namespace:default,Attempt:0,} returns sandbox id \"83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892\"" Jan 29 12:58:06.127859 containerd[1581]: time="2025-01-29T12:58:06.127832512Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 12:58:06.590299 systemd[1]: run-containerd-runc-k8s.io-83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892-runc.yGbNJv.mount: Deactivated successfully. Jan 29 12:58:06.794829 kubelet[1976]: E0129 12:58:06.793161 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:07.463223 systemd-networkd[1206]: cali60e51b789ff: Gained IPv6LL Jan 29 12:58:07.682714 systemd[1]: run-containerd-runc-k8s.io-d0b6ae4659000f7ab8c91252277a15941db9ed9c5bf5562cc60e68a2b857fd41-runc.T36mE4.mount: Deactivated successfully. Jan 29 12:58:07.794091 kubelet[1976]: E0129 12:58:07.794010 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:08.746940 kubelet[1976]: E0129 12:58:08.746738 1976 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:08.772616 containerd[1581]: time="2025-01-29T12:58:08.772428572Z" level=info msg="StopPodSandbox for \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\"" Jan 29 12:58:08.795061 kubelet[1976]: E0129 12:58:08.794986 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.867 [WARNING][3380] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-csi--node--driver--wqlkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43cc5aeb-8d72-4137-8afa-6422de953051", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de", Pod:"csi-node-driver-wqlkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57a7cde3725", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.867 [INFO][3380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.867 [INFO][3380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" iface="eth0" netns="" Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.867 [INFO][3380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.867 [INFO][3380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.920 [INFO][3386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" HandleID="k8s-pod-network.4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.920 [INFO][3386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.921 [INFO][3386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.935 [WARNING][3386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" HandleID="k8s-pod-network.4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.935 [INFO][3386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" HandleID="k8s-pod-network.4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.937 [INFO][3386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:58:08.940993 containerd[1581]: 2025-01-29 12:58:08.938 [INFO][3380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:58:08.941652 containerd[1581]: time="2025-01-29T12:58:08.941034473Z" level=info msg="TearDown network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\" successfully" Jan 29 12:58:08.941652 containerd[1581]: time="2025-01-29T12:58:08.941067956Z" level=info msg="StopPodSandbox for \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\" returns successfully" Jan 29 12:58:08.942266 containerd[1581]: time="2025-01-29T12:58:08.941722463Z" level=info msg="RemovePodSandbox for \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\"" Jan 29 12:58:08.942266 containerd[1581]: time="2025-01-29T12:58:08.941761296Z" level=info msg="Forcibly stopping sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\"" Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.013 [WARNING][3406] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-csi--node--driver--wqlkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43cc5aeb-8d72-4137-8afa-6422de953051", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"de90545a9ebbdf42003ca1622ac7e8b1753252acadd46d12f61f72d95b9ad4de", Pod:"csi-node-driver-wqlkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57a7cde3725", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.013 [INFO][3406] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.013 [INFO][3406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" iface="eth0" netns="" Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.014 [INFO][3406] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.014 [INFO][3406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.053 [INFO][3412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" HandleID="k8s-pod-network.4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.054 [INFO][3412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.054 [INFO][3412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.064 [WARNING][3412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" HandleID="k8s-pod-network.4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.064 [INFO][3412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" HandleID="k8s-pod-network.4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Workload="172.24.4.160-k8s-csi--node--driver--wqlkd-eth0" Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.066 [INFO][3412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:58:09.069240 containerd[1581]: 2025-01-29 12:58:09.067 [INFO][3406] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706" Jan 29 12:58:09.069240 containerd[1581]: time="2025-01-29T12:58:09.069150536Z" level=info msg="TearDown network for sandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\" successfully" Jan 29 12:58:09.073288 containerd[1581]: time="2025-01-29T12:58:09.073250985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:58:09.073353 containerd[1581]: time="2025-01-29T12:58:09.073306548Z" level=info msg="RemovePodSandbox \"4ff0b977da72bfe60c78411c4333d3f987adbd4a9450343a5ec891a3ce63b706\" returns successfully" Jan 29 12:58:09.074343 containerd[1581]: time="2025-01-29T12:58:09.074071092Z" level=info msg="StopPodSandbox for \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\"" Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.138 [WARNING][3430] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8f708e0b-c2a1-437f-bd4d-207b0ef20694", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8", Pod:"nginx-deployment-85f456d6dd-62vp4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliefb2941855e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.138 [INFO][3430] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.138 [INFO][3430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" iface="eth0" netns="" Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.138 [INFO][3430] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.138 [INFO][3430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.168 [INFO][3436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" HandleID="k8s-pod-network.eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.169 [INFO][3436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.169 [INFO][3436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.175 [WARNING][3436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" HandleID="k8s-pod-network.eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.175 [INFO][3436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" HandleID="k8s-pod-network.eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.177 [INFO][3436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:58:09.179738 containerd[1581]: 2025-01-29 12:58:09.178 [INFO][3430] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:58:09.180221 containerd[1581]: time="2025-01-29T12:58:09.179775824Z" level=info msg="TearDown network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\" successfully" Jan 29 12:58:09.180221 containerd[1581]: time="2025-01-29T12:58:09.179835747Z" level=info msg="StopPodSandbox for \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\" returns successfully" Jan 29 12:58:09.180738 containerd[1581]: time="2025-01-29T12:58:09.180699336Z" level=info msg="RemovePodSandbox for \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\"" Jan 29 12:58:09.180738 containerd[1581]: time="2025-01-29T12:58:09.180736025Z" level=info msg="Forcibly stopping sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\"" Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.235 [WARNING][3454] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8f708e0b-c2a1-437f-bd4d-207b0ef20694", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"91260522d08d9fe8f76399fc5d8489266e8f78c356983d4f70f30451ac347ae8", Pod:"nginx-deployment-85f456d6dd-62vp4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliefb2941855e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.235 [INFO][3454] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.236 [INFO][3454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" iface="eth0" netns="" Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.236 [INFO][3454] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.236 [INFO][3454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.298 [INFO][3460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" HandleID="k8s-pod-network.eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.298 [INFO][3460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.298 [INFO][3460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.314 [WARNING][3460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" HandleID="k8s-pod-network.eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.314 [INFO][3460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" HandleID="k8s-pod-network.eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Workload="172.24.4.160-k8s-nginx--deployment--85f456d6dd--62vp4-eth0" Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.316 [INFO][3460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:58:09.324767 containerd[1581]: 2025-01-29 12:58:09.318 [INFO][3454] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030" Jan 29 12:58:09.324767 containerd[1581]: time="2025-01-29T12:58:09.324534856Z" level=info msg="TearDown network for sandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\" successfully" Jan 29 12:58:09.330506 containerd[1581]: time="2025-01-29T12:58:09.330466979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:58:09.330601 containerd[1581]: time="2025-01-29T12:58:09.330514688Z" level=info msg="RemovePodSandbox \"eea0d1e6ab9d78486ba5b295340182c7d4ee37747087a4af708c28ec36ead030\" returns successfully" Jan 29 12:58:09.795840 kubelet[1976]: E0129 12:58:09.795777 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:10.080919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221842436.mount: Deactivated successfully. 
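The WorkloadEndpoint dumps for nfs-server-provisioner-0 above print the NFS service ports twice: in decimal in the cni-plugin/plugin.go 325 summary (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662) and in hexadecimal in the endpoint spec (0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296). A tiny sketch, included only to show that the two listings agree:

```go
package main

import "fmt"

func main() {
	// Name/hex pairs from the v3.WorkloadEndpointPort dump; the decimal values
	// match the port list printed by cni-plugin/plugin.go 325 for the same pod.
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801}, {"nlockmgr", 0x8023}, {"mountd", 0x4e50},
		{"rquotad", 0x36b}, {"rpcbind", 0x6f}, {"statd", 0x296},
	}
	for _, p := range ports {
		fmt.Printf("%-8s 0x%x = %d\n", p.name, p.hex, p.hex)
	}
	// Output: nfs 0x801 = 2049, nlockmgr 0x8023 = 32803, mountd 0x4e50 = 20048,
	// rquotad 0x36b = 875, rpcbind 0x6f = 111, statd 0x296 = 662.
}
```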
Jan 29 12:58:10.797016 kubelet[1976]: E0129 12:58:10.796866 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:11.797339 kubelet[1976]: E0129 12:58:11.797300 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:12.798171 kubelet[1976]: E0129 12:58:12.798113 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:13.182530 containerd[1581]: time="2025-01-29T12:58:13.182283159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:58:13.184562 containerd[1581]: time="2025-01-29T12:58:13.184236823Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 29 12:58:13.187819 containerd[1581]: time="2025-01-29T12:58:13.186067285Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:58:13.189763 containerd[1581]: time="2025-01-29T12:58:13.189725335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:58:13.190976 containerd[1581]: time="2025-01-29T12:58:13.190947006Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 7.063073978s" Jan 29 12:58:13.191080 containerd[1581]: time="2025-01-29T12:58:13.191059717Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 12:58:13.194096 containerd[1581]: time="2025-01-29T12:58:13.194060323Z" level=info msg="CreateContainer within sandbox \"83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 12:58:13.211966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355010380.mount: Deactivated successfully. 
Jan 29 12:58:13.220195 containerd[1581]: time="2025-01-29T12:58:13.219709179Z" level=info msg="CreateContainer within sandbox \"83c285fdcccabbd846d62dca34c98bb1a2280738ed7486cbba651295f7cd8892\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4b8f6659c80caf162f8509f2c619b99221450752fc8d5da866ff4f876620ce32\"" Jan 29 12:58:13.220576 containerd[1581]: time="2025-01-29T12:58:13.220464936Z" level=info msg="StartContainer for \"4b8f6659c80caf162f8509f2c619b99221450752fc8d5da866ff4f876620ce32\"" Jan 29 12:58:13.320620 containerd[1581]: time="2025-01-29T12:58:13.320483216Z" level=info msg="StartContainer for \"4b8f6659c80caf162f8509f2c619b99221450752fc8d5da866ff4f876620ce32\" returns successfully" Jan 29 12:58:13.490244 kubelet[1976]: I0129 12:58:13.490130 1976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.425512207 podStartE2EDuration="8.490093021s" podCreationTimestamp="2025-01-29 12:58:05 +0000 UTC" firstStartedPulling="2025-01-29 12:58:06.127304511 +0000 UTC m=+58.385337917" lastFinishedPulling="2025-01-29 12:58:13.191885325 +0000 UTC m=+65.449918731" observedRunningTime="2025-01-29 12:58:13.489401244 +0000 UTC m=+65.747434690" watchObservedRunningTime="2025-01-29 12:58:13.490093021 +0000 UTC m=+65.748126467" Jan 29 12:58:13.799685 kubelet[1976]: E0129 12:58:13.799380 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:14.800025 kubelet[1976]: E0129 12:58:14.799922 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:15.800372 kubelet[1976]: E0129 12:58:15.800092 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:16.803834 kubelet[1976]: E0129 12:58:16.801294 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:17.801697 kubelet[1976]: E0129 12:58:17.801578 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:18.802223 kubelet[1976]: E0129 12:58:18.802091 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:19.802921 kubelet[1976]: E0129 12:58:19.802844 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:20.804034 kubelet[1976]: E0129 12:58:20.803922 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:21.804341 kubelet[1976]: E0129 12:58:21.804244 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:22.805190 kubelet[1976]: E0129 12:58:22.805071 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:23.806408 kubelet[1976]: E0129 12:58:23.806255 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:24.807093 kubelet[1976]: E0129 12:58:24.807010 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:25.807749 
kubelet[1976]: E0129 12:58:25.807648 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:26.808624 kubelet[1976]: E0129 12:58:26.808496 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:27.809521 kubelet[1976]: E0129 12:58:27.809394 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:28.747071 kubelet[1976]: E0129 12:58:28.746991 1976 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:28.810268 kubelet[1976]: E0129 12:58:28.810200 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:29.810471 kubelet[1976]: E0129 12:58:29.810395 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:30.811786 kubelet[1976]: E0129 12:58:30.811679 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:31.812846 kubelet[1976]: E0129 12:58:31.812728 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:32.813066 kubelet[1976]: E0129 12:58:32.812926 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:33.813658 kubelet[1976]: E0129 12:58:33.813544 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:34.814592 kubelet[1976]: E0129 12:58:34.814494 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:35.814778 kubelet[1976]: E0129 12:58:35.814684 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:36.815212 kubelet[1976]: E0129 12:58:36.815132 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:37.816093 kubelet[1976]: E0129 12:58:37.816011 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:58:38.075087 kubelet[1976]: I0129 12:58:38.074658 1976 topology_manager.go:215] "Topology Admit Handler" podUID="a8430d61-c88b-4a83-9aa9-c62afe1e2abf" podNamespace="default" podName="test-pod-1" Jan 29 12:58:38.194867 kubelet[1976]: I0129 12:58:38.194595 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fcbe748e-0457-46ff-8cca-9bb53e838315\" (UniqueName: \"kubernetes.io/nfs/a8430d61-c88b-4a83-9aa9-c62afe1e2abf-pvc-fcbe748e-0457-46ff-8cca-9bb53e838315\") pod \"test-pod-1\" (UID: \"a8430d61-c88b-4a83-9aa9-c62afe1e2abf\") " pod="default/test-pod-1" Jan 29 12:58:38.194867 kubelet[1976]: I0129 12:58:38.194690 1976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nbqt\" (UniqueName: \"kubernetes.io/projected/a8430d61-c88b-4a83-9aa9-c62afe1e2abf-kube-api-access-8nbqt\") pod \"test-pod-1\" (UID: \"a8430d61-c88b-4a83-9aa9-c62afe1e2abf\") " pod="default/test-pod-1" Jan 29 12:58:38.369139 kernel: FS-Cache: Loaded 
Jan 29 12:58:38.464475 kernel: RPC: Registered named UNIX socket transport module.
Jan 29 12:58:38.464639 kernel: RPC: Registered udp transport module.
Jan 29 12:58:38.464684 kernel: RPC: Registered tcp transport module.
Jan 29 12:58:38.464720 kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 12:58:38.465211 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 12:58:38.816858 kubelet[1976]: E0129 12:58:38.816348 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:38.835467 kernel: NFS: Registering the id_resolver key type
Jan 29 12:58:38.835596 kernel: Key type id_resolver registered
Jan 29 12:58:38.835660 kernel: Key type id_legacy registered
Jan 29 12:58:38.888307 nfsidmap[3617]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Jan 29 12:58:38.899310 nfsidmap[3618]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Jan 29 12:58:38.981574 containerd[1581]: time="2025-01-29T12:58:38.981444095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a8430d61-c88b-4a83-9aa9-c62afe1e2abf,Namespace:default,Attempt:0,}"
Jan 29 12:58:39.224742 systemd-networkd[1206]: cali5ec59c6bf6e: Link UP
Jan 29 12:58:39.227213 systemd-networkd[1206]: cali5ec59c6bf6e: Gained carrier
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.091 [INFO][3620] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.160-k8s-test--pod--1-eth0 default a8430d61-c88b-4a83-9aa9-c62afe1e2abf 1346 0 2025-01-29 12:58:07 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.160 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.160-k8s-test--pod--1-"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.091 [INFO][3620] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.160-k8s-test--pod--1-eth0"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.146 [INFO][3630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" HandleID="k8s-pod-network.cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Workload="172.24.4.160-k8s-test--pod--1-eth0"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.168 [INFO][3630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" HandleID="k8s-pod-network.cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Workload="172.24.4.160-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336db0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.160", "pod":"test-pod-1", "timestamp":"2025-01-29 12:58:39.146177507 +0000 UTC"}, Hostname:"172.24.4.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.168 [INFO][3630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.168 [INFO][3630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.168 [INFO][3630] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.160'
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.172 [INFO][3630] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" host="172.24.4.160"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.179 [INFO][3630] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.160"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.187 [INFO][3630] ipam/ipam.go 489: Trying affinity for 192.168.77.192/26 host="172.24.4.160"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.190 [INFO][3630] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.192/26 host="172.24.4.160"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.195 [INFO][3630] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="172.24.4.160"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.195 [INFO][3630] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" host="172.24.4.160"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.198 [INFO][3630] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.205 [INFO][3630] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" host="172.24.4.160"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.217 [INFO][3630] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.77.196/26] block=192.168.77.192/26 handle="k8s-pod-network.cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" host="172.24.4.160"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.218 [INFO][3630] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.196/26] handle="k8s-pod-network.cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" host="172.24.4.160"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.218 [INFO][3630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
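Note: the IPAM sequence above takes the host-wide lock, confirms this node's affinity for the block 192.168.77.192/26, and claims 192.168.77.196 for test-pod-1. A minimal Go sketch (standard library only, not Calico code) checking that the claimed address falls inside the affine block:

package main

import (
	"fmt"
	"net/netip"
)

// Quick containment check for the address Calico IPAM assigned above;
// purely illustrative, not part of the CNI plugin.
func main() {
	block := netip.MustParsePrefix("192.168.77.192/26") // 64-address block affine to 172.24.4.160
	addr := netip.MustParseAddr("192.168.77.196")       // address claimed for test-pod-1

	fmt.Println(block.Contains(addr)) // true: .196 is offset 4 within the block
}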
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.218 [INFO][3630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.196/26] IPv6=[] ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" HandleID="k8s-pod-network.cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Workload="172.24.4.160-k8s-test--pod--1-eth0"
Jan 29 12:58:39.243574 containerd[1581]: 2025-01-29 12:58:39.220 [INFO][3620] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.160-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a8430d61-c88b-4a83-9aa9-c62afe1e2abf", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:58:39.245287 containerd[1581]: 2025-01-29 12:58:39.221 [INFO][3620] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.196/32] ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.160-k8s-test--pod--1-eth0"
Jan 29 12:58:39.245287 containerd[1581]: 2025-01-29 12:58:39.221 [INFO][3620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.160-k8s-test--pod--1-eth0"
Jan 29 12:58:39.245287 containerd[1581]: 2025-01-29 12:58:39.224 [INFO][3620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.160-k8s-test--pod--1-eth0"
Jan 29 12:58:39.245287 containerd[1581]: 2025-01-29 12:58:39.226 [INFO][3620] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.160-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.160-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a8430d61-c88b-4a83-9aa9-c62afe1e2abf", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.160", ContainerID:"cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"5e:53:1a:41:71:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:58:39.245287 containerd[1581]: 2025-01-29 12:58:39.241 [INFO][3620] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.160-k8s-test--pod--1-eth0"
Jan 29 12:58:39.278498 containerd[1581]: time="2025-01-29T12:58:39.275252838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:58:39.278498 containerd[1581]: time="2025-01-29T12:58:39.275329541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:58:39.278498 containerd[1581]: time="2025-01-29T12:58:39.275349980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:58:39.278498 containerd[1581]: time="2025-01-29T12:58:39.275474264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:58:39.344839 containerd[1581]: time="2025-01-29T12:58:39.344784112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a8430d61-c88b-4a83-9aa9-c62afe1e2abf,Namespace:default,Attempt:0,} returns sandbox id \"cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b\""
Jan 29 12:58:39.346748 containerd[1581]: time="2025-01-29T12:58:39.346711844Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 12:58:39.754045 containerd[1581]: time="2025-01-29T12:58:39.753927852Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:39.755874 containerd[1581]: time="2025-01-29T12:58:39.755726773Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 12:58:39.766950 containerd[1581]: time="2025-01-29T12:58:39.766866196Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 419.953975ms"
Jan 29 12:58:39.767315 containerd[1581]: time="2025-01-29T12:58:39.767116086Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 12:58:39.771824 containerd[1581]: time="2025-01-29T12:58:39.771726480Z" level=info msg="CreateContainer within sandbox \"cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 12:58:39.800613 containerd[1581]: time="2025-01-29T12:58:39.800513307Z" level=info msg="CreateContainer within sandbox \"cad2a1253d85d69633a7047a03f24fda87eba121884c7264cbf38f9bc00f0b0b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d8a2172e238c73abfc75b968aa1a93c5dcc481d8962060f39cd178f836d6ca18\""
Jan 29 12:58:39.802267 containerd[1581]: time="2025-01-29T12:58:39.802210215Z" level=info msg="StartContainer for \"d8a2172e238c73abfc75b968aa1a93c5dcc481d8962060f39cd178f836d6ca18\""
Jan 29 12:58:39.816774 kubelet[1976]: E0129 12:58:39.816705 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:39.917854 containerd[1581]: time="2025-01-29T12:58:39.917331734Z" level=info msg="StartContainer for \"d8a2172e238c73abfc75b968aa1a93c5dcc481d8962060f39cd178f836d6ca18\" returns successfully"
Jan 29 12:58:40.423207 systemd-networkd[1206]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 12:58:40.818006 kubelet[1976]: E0129 12:58:40.817889 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:41.819065 kubelet[1976]: E0129 12:58:41.818886 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:42.819935 kubelet[1976]: E0129 12:58:42.819853 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:43.821055 kubelet[1976]: E0129 12:58:43.820916 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:44.821251 kubelet[1976]: E0129 12:58:44.821136 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:45.821737 kubelet[1976]: E0129 12:58:45.821637 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:46.822027 kubelet[1976]: E0129 12:58:46.821941 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:47.822760 kubelet[1976]: E0129 12:58:47.822600 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:48.746994 kubelet[1976]: E0129 12:58:48.746919 1976 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:48.823307 kubelet[1976]: E0129 12:58:48.823173 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:49.823747 kubelet[1976]: E0129 12:58:49.823621 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:50.824858 kubelet[1976]: E0129 12:58:50.824743 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:51.828531 kubelet[1976]: E0129 12:58:51.828409 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:52.829573 kubelet[1976]: E0129 12:58:52.829439 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:58:53.830228 kubelet[1976]: E0129 12:58:53.830128 1976 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
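Note: the RunPodSandbox, PullImage ghcr.io/flatcar/nginx:latest, CreateContainer, and StartContainer sequence above is driven by the kubelet through containerd's CRI plugin. Below is a minimal sketch of the same pull/create/start flow against containerd directly, using the containerd Go client rather than CRI; it assumes the v1 Go module import paths and the default socket at /run/containerd/containerd.sock, and the container and snapshot IDs are made up for the example (the kubelet's own containers live in the "k8s.io" namespace, not the one used here).

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to containerd over its default socket (assumed path).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Use a scratch namespace; CRI-managed pods are under "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "default")

	// Pull the same image the test pod used above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container and a task from the image (IDs are illustrative).
	container, err := client.NewContainer(ctx, "nginx-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("nginx-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}

	// Start it, roughly what the CRI StartContainer step above amounts to.
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s as pid %d", container.ID(), task.Pid())
	// Cleanup (task.Kill, task.Delete, container.Delete) omitted for brevity.
}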