Jan 17 12:47:24.055779 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:47:24.055809 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:47:24.055819 kernel: BIOS-provided physical RAM map:
Jan 17 12:47:24.055877 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:47:24.055885 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:47:24.055895 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:47:24.055904 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 17 12:47:24.055912 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 17 12:47:24.055920 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:47:24.055928 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:47:24.055935 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 17 12:47:24.055943 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 12:47:24.055951 kernel: NX (Execute Disable) protection: active
Jan 17 12:47:24.055959 kernel: APIC: Static calls initialized
Jan 17 12:47:24.055970 kernel: SMBIOS 3.0.0 present.
Jan 17 12:47:24.055978 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 17 12:47:24.055987 kernel: Hypervisor detected: KVM
Jan 17 12:47:24.055995 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:47:24.056003 kernel: kvm-clock: using sched offset of 4560242371 cycles
Jan 17 12:47:24.056014 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:47:24.056022 kernel: tsc: Detected 1996.249 MHz processor
Jan 17 12:47:24.056031 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:47:24.056039 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:47:24.056048 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 17 12:47:24.056056 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:47:24.056065 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:47:24.056073 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 17 12:47:24.056081 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:47:24.056091 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 17 12:47:24.056099 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:47:24.056108 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:47:24.056116 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:47:24.056124 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 17 12:47:24.056133 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:47:24.056141 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:47:24.056177 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 17 12:47:24.056186 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 17 12:47:24.056197 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 17 12:47:24.056206 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 17 12:47:24.056214 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 17 12:47:24.056225 kernel: No NUMA configuration found
Jan 17 12:47:24.056234 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 17 12:47:24.056243 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 17 12:47:24.056255 kernel: Zone ranges:
Jan 17 12:47:24.056264 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:47:24.056273 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 12:47:24.056282 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 17 12:47:24.056290 kernel: Movable zone start for each node
Jan 17 12:47:24.056299 kernel: Early memory node ranges
Jan 17 12:47:24.056307 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:47:24.056316 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 17 12:47:24.056327 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 17 12:47:24.056335 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 17 12:47:24.056344 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:47:24.056353 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:47:24.056362 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 17 12:47:24.056370 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:47:24.056379 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:47:24.056388 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:47:24.056397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:47:24.056408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:47:24.056416 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:47:24.056425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:47:24.056434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:47:24.056442 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:47:24.056451 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:47:24.056460 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:47:24.056468 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 17 12:47:24.056477 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:47:24.056488 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:47:24.056497 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:47:24.056506 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:47:24.056514 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:47:24.056523 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:47:24.056531 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 12:47:24.056541 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:47:24.056551 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:47:24.056562 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:47:24.056571 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:47:24.056580 kernel: Fallback order for Node 0: 0
Jan 17 12:47:24.056588 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 17 12:47:24.056597 kernel: Policy zone: Normal
Jan 17 12:47:24.056606 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:47:24.056614 kernel: software IO TLB: area num 2.
Jan 17 12:47:24.056623 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 227308K reserved, 0K cma-reserved)
Jan 17 12:47:24.056632 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:47:24.056643 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:47:24.056652 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:47:24.056660 kernel: Dynamic Preempt: voluntary
Jan 17 12:47:24.056669 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:47:24.056678 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:47:24.056688 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:47:24.056697 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:47:24.056705 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:47:24.056714 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:47:24.056725 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:47:24.056734 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:47:24.056742 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:47:24.056751 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:47:24.056760 kernel: Console: colour VGA+ 80x25
Jan 17 12:47:24.056768 kernel: printk: console [tty0] enabled
Jan 17 12:47:24.056777 kernel: printk: console [ttyS0] enabled
Jan 17 12:47:24.056786 kernel: ACPI: Core revision 20230628
Jan 17 12:47:24.056795 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:47:24.056805 kernel: x2apic enabled
Jan 17 12:47:24.056814 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:47:24.056823 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:47:24.056832 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 12:47:24.056840 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 17 12:47:24.056849 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 12:47:24.056858 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 12:47:24.056867 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:47:24.056876 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:47:24.056886 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:47:24.056896 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:47:24.056904 kernel: Speculative Store Bypass: Vulnerable
Jan 17 12:47:24.056913 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 17 12:47:24.056922 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:47:24.056937 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:47:24.056948 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:47:24.056957 kernel: landlock: Up and running.
Jan 17 12:47:24.056966 kernel: SELinux: Initializing.
Jan 17 12:47:24.056975 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:47:24.056985 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:47:24.056994 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 17 12:47:24.057006 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:47:24.057016 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:47:24.057025 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:47:24.057034 kernel: Performance Events: AMD PMU driver.
Jan 17 12:47:24.057043 kernel: ... version: 0
Jan 17 12:47:24.057054 kernel: ... bit width: 48
Jan 17 12:47:24.057063 kernel: ... generic registers: 4
Jan 17 12:47:24.057072 kernel: ... value mask: 0000ffffffffffff
Jan 17 12:47:24.057081 kernel: ... max period: 00007fffffffffff
Jan 17 12:47:24.057090 kernel: ... fixed-purpose events: 0
Jan 17 12:47:24.057100 kernel: ... event mask: 000000000000000f
Jan 17 12:47:24.057109 kernel: signal: max sigframe size: 1440
Jan 17 12:47:24.057118 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:47:24.057127 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:47:24.057138 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:47:24.057148 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:47:24.057171 kernel: .... node #0, CPUs: #1
Jan 17 12:47:24.057180 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:47:24.057189 kernel: smpboot: Max logical packages: 2
Jan 17 12:47:24.057215 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 17 12:47:24.057224 kernel: devtmpfs: initialized
Jan 17 12:47:24.057233 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:47:24.057243 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:47:24.057255 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:47:24.057264 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:47:24.057274 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:47:24.057283 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:47:24.057292 kernel: audit: type=2000 audit(1737118043.051:1): state=initialized audit_enabled=0 res=1
Jan 17 12:47:24.057301 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:47:24.057310 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:47:24.057319 kernel: cpuidle: using governor menu
Jan 17 12:47:24.057328 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:47:24.057339 kernel: dca service started, version 1.12.1
Jan 17 12:47:24.057349 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:47:24.057358 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:47:24.057368 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:47:24.057377 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:47:24.057386 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:47:24.057395 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:47:24.057404 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:47:24.057413 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:47:24.057425 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:47:24.057434 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:47:24.057443 kernel: ACPI: Interpreter enabled
Jan 17 12:47:24.057452 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 12:47:24.057461 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:47:24.057470 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:47:24.057480 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:47:24.057489 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 12:47:24.057498 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:47:24.057662 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:47:24.057767 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:47:24.057863 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:47:24.057878 kernel: acpiphp: Slot [3] registered
Jan 17 12:47:24.057888 kernel: acpiphp: Slot [4] registered
Jan 17 12:47:24.057897 kernel: acpiphp: Slot [5] registered
Jan 17 12:47:24.057906 kernel: acpiphp: Slot [6] registered
Jan 17 12:47:24.057918 kernel: acpiphp: Slot [7] registered
Jan 17 12:47:24.057927 kernel: acpiphp: Slot [8] registered
Jan 17 12:47:24.057936 kernel: acpiphp: Slot [9] registered
Jan 17 12:47:24.057946 kernel: acpiphp: Slot [10] registered
Jan 17 12:47:24.057955 kernel: acpiphp: Slot [11] registered
Jan 17 12:47:24.057964 kernel: acpiphp: Slot [12] registered
Jan 17 12:47:24.057973 kernel: acpiphp: Slot [13] registered
Jan 17 12:47:24.057982 kernel: acpiphp: Slot [14] registered
Jan 17 12:47:24.057991 kernel: acpiphp: Slot [15] registered
Jan 17 12:47:24.058000 kernel: acpiphp: Slot [16] registered
Jan 17 12:47:24.058011 kernel: acpiphp: Slot [17] registered
Jan 17 12:47:24.058020 kernel: acpiphp: Slot [18] registered
Jan 17 12:47:24.058029 kernel: acpiphp: Slot [19] registered
Jan 17 12:47:24.058038 kernel: acpiphp: Slot [20] registered
Jan 17 12:47:24.058047 kernel: acpiphp: Slot [21] registered
Jan 17 12:47:24.058056 kernel: acpiphp: Slot [22] registered
Jan 17 12:47:24.058065 kernel: acpiphp: Slot [23] registered
Jan 17 12:47:24.058074 kernel: acpiphp: Slot [24] registered
Jan 17 12:47:24.058083 kernel: acpiphp: Slot [25] registered
Jan 17 12:47:24.058095 kernel: acpiphp: Slot [26] registered
Jan 17 12:47:24.058104 kernel: acpiphp: Slot [27] registered
Jan 17 12:47:24.058113 kernel: acpiphp: Slot [28] registered
Jan 17 12:47:24.058122 kernel: acpiphp: Slot [29] registered
Jan 17 12:47:24.058131 kernel: acpiphp: Slot [30] registered
Jan 17 12:47:24.058140 kernel: acpiphp: Slot [31] registered
Jan 17 12:47:24.058171 kernel: PCI host bridge to bus 0000:00
Jan 17 12:47:24.058274 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:47:24.058362 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:47:24.058455 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:47:24.058541 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 12:47:24.058626 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 17 12:47:24.058715 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:47:24.058825 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:47:24.058932 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 12:47:24.059043 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 12:47:24.059143 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 17 12:47:24.059366 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 12:47:24.059465 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 12:47:24.059562 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 12:47:24.059659 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 12:47:24.059766 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 12:47:24.059877 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 12:47:24.059973 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 12:47:24.060083 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 12:47:24.060204 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 12:47:24.060306 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 17 12:47:24.060403 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 17 12:47:24.060506 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 17 12:47:24.060603 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:47:24.060711 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:47:24.060810 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 17 12:47:24.060906 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 17 12:47:24.061002 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 17 12:47:24.061097 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 17 12:47:24.061249 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:47:24.061352 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:47:24.061450 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 17 12:47:24.061551 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 17 12:47:24.061655 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 12:47:24.061754 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 17 12:47:24.061853 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 17 12:47:24.061964 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:47:24.062060 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 17 12:47:24.062205 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 17 12:47:24.062308 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 17 12:47:24.062322 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:47:24.062332 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:47:24.062342 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:47:24.062351 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:47:24.062365 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:47:24.062374 kernel: iommu: Default domain type: Translated
Jan 17 12:47:24.062383 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:47:24.062393 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:47:24.062402 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:47:24.062411 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:47:24.062421 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 17 12:47:24.062515 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 12:47:24.062610 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 12:47:24.062709 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:47:24.062723 kernel: vgaarb: loaded
Jan 17 12:47:24.062733 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:47:24.062742 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:47:24.062751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:47:24.062761 kernel: pnp: PnP ACPI init
Jan 17 12:47:24.062859 kernel: pnp 00:03: [dma 2]
Jan 17 12:47:24.062873 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 12:47:24.062887 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:47:24.062896 kernel: NET: Registered PF_INET protocol family
Jan 17 12:47:24.062906 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:47:24.062916 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:47:24.062925 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:47:24.062935 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:47:24.062944 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:47:24.062954 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:47:24.062963 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:47:24.062975 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:47:24.062984 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:47:24.062994 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:47:24.063079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:47:24.063199 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:47:24.063293 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:47:24.063380 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 17 12:47:24.063465 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 17 12:47:24.063563 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 12:47:24.063668 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:47:24.063683 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:47:24.063692 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 12:47:24.063702 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 17 12:47:24.063711 kernel: Initialise system trusted keyrings
Jan 17 12:47:24.063721 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:47:24.063730 kernel: Key type asymmetric registered
Jan 17 12:47:24.063739 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:47:24.063752 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:47:24.063761 kernel: io scheduler mq-deadline registered
Jan 17 12:47:24.063771 kernel: io scheduler kyber registered
Jan 17 12:47:24.063780 kernel: io scheduler bfq registered
Jan 17 12:47:24.063789 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:47:24.063799 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 12:47:24.063808 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:47:24.063818 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:47:24.063869 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:47:24.063881 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:47:24.063890 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:47:24.063900 kernel: random: crng init done
Jan 17 12:47:24.063909 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:47:24.063918 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:47:24.063928 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:47:24.064034 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 12:47:24.064049 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:47:24.064140 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 12:47:24.066294 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:47:23 UTC (1737118043)
Jan 17 12:47:24.066387 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 17 12:47:24.066402 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 12:47:24.066412 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:47:24.066421 kernel: Segment Routing with IPv6
Jan 17 12:47:24.066430 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:47:24.066440 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:47:24.066449 kernel: Key type dns_resolver registered
Jan 17 12:47:24.066463 kernel: IPI shorthand broadcast: enabled
Jan 17 12:47:24.066473 kernel: sched_clock: Marking stable (999007693, 171878906)->(1211256782, -40370183)
Jan 17 12:47:24.066482 kernel: registered taskstats version 1
Jan 17 12:47:24.066491 kernel: Loading compiled-in X.509 certificates
Jan 17 12:47:24.066501 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:47:24.066510 kernel: Key type .fscrypt registered
Jan 17 12:47:24.066519 kernel: Key type fscrypt-provisioning registered
Jan 17 12:47:24.066528 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:47:24.066540 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:47:24.066549 kernel: ima: No architecture policies found
Jan 17 12:47:24.066558 kernel: clk: Disabling unused clocks
Jan 17 12:47:24.066567 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:47:24.066576 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:47:24.066586 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:47:24.066595 kernel: Run /init as init process
Jan 17 12:47:24.066604 kernel: with arguments:
Jan 17 12:47:24.066613 kernel: /init
Jan 17 12:47:24.066622 kernel: with environment:
Jan 17 12:47:24.066634 kernel: HOME=/
Jan 17 12:47:24.066642 kernel: TERM=linux
Jan 17 12:47:24.066652 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:47:24.066664 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:47:24.066676 systemd[1]: Detected virtualization kvm.
Jan 17 12:47:24.066687 systemd[1]: Detected architecture x86-64.
Jan 17 12:47:24.066696 systemd[1]: Running in initrd.
Jan 17 12:47:24.066709 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:47:24.066718 systemd[1]: Hostname set to .
Jan 17 12:47:24.066729 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:47:24.066739 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:47:24.066749 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:47:24.066759 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:47:24.066770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:47:24.066789 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:47:24.066801 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:47:24.066811 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:47:24.066824 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:47:24.066835 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:47:24.066847 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:47:24.066857 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:47:24.066867 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:47:24.066878 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:47:24.066888 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:47:24.066898 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:47:24.066908 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:47:24.066919 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:47:24.066929 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:47:24.066942 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:47:24.066952 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:47:24.066963 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:47:24.066973 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:47:24.066983 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:47:24.066994 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:47:24.067005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:47:24.067015 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:47:24.067027 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:47:24.067037 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:47:24.067048 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:47:24.067079 systemd-journald[184]: Collecting audit messages is disabled.
Jan 17 12:47:24.067106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:47:24.067118 systemd-journald[184]: Journal started
Jan 17 12:47:24.067141 systemd-journald[184]: Runtime Journal (/run/log/journal/8c0e300b95d04153b47cb897fdbacb3b) is 8.0M, max 78.3M, 70.3M free.
Jan 17 12:47:24.082656 systemd-modules-load[185]: Inserted module 'overlay'
Jan 17 12:47:24.089241 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:47:24.090904 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:47:24.094766 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:47:24.098348 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:47:24.112516 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:47:24.126177 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:47:24.128076 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 17 12:47:24.168829 kernel: Bridge firewalling registered
Jan 17 12:47:24.128346 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:47:24.175390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:47:24.176352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:47:24.178667 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:47:24.186382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:47:24.189298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:47:24.191740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:47:24.195228 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:47:24.205253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:47:24.209310 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:47:24.210041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:47:24.213227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:47:24.221369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:47:24.236433 dracut-cmdline[217]: dracut-dracut-053
Jan 17 12:47:24.242110 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:47:24.258358 systemd-resolved[220]: Positive Trust Anchors:
Jan 17 12:47:24.258370 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:47:24.258410 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:47:24.264934 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 17 12:47:24.265908 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:47:24.266733 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:47:24.324212 kernel: SCSI subsystem initialized
Jan 17 12:47:24.334417 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:47:24.347389 kernel: iscsi: registered transport (tcp)
Jan 17 12:47:24.370244 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:47:24.370363 kernel: QLogic iSCSI HBA Driver
Jan 17 12:47:24.433172 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:47:24.439549 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:47:24.490990 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:47:24.491092 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:47:24.491123 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:47:24.540217 kernel: raid6: sse2x4 gen() 12728 MB/s
Jan 17 12:47:24.558260 kernel: raid6: sse2x2 gen() 14270 MB/s
Jan 17 12:47:24.576729 kernel: raid6: sse2x1 gen() 9323 MB/s
Jan 17 12:47:24.576792 kernel: raid6: using algorithm sse2x2 gen() 14270 MB/s
Jan 17 12:47:24.595696 kernel: raid6: .... xor() 8909 MB/s, rmw enabled
Jan 17 12:47:24.595788 kernel: raid6: using ssse3x2 recovery algorithm
Jan 17 12:47:24.619253 kernel: xor: measuring software checksum speed
Jan 17 12:47:24.619321 kernel: prefetch64-sse : 15978 MB/sec
Jan 17 12:47:24.621777 kernel: generic_sse : 15676 MB/sec
Jan 17 12:47:24.621838 kernel: xor: using function: prefetch64-sse (15978 MB/sec)
Jan 17 12:47:24.812220 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:47:24.830225 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:47:24.836470 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:47:24.851395 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 17 12:47:24.855984 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:47:24.867410 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:47:24.906610 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Jan 17 12:47:24.971082 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:47:24.981548 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:47:25.071047 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:47:25.081463 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:47:25.115387 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:47:25.124812 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:47:25.127659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:47:25.130103 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:47:25.140523 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:47:25.165420 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:47:25.182177 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 17 12:47:25.216300 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 17 12:47:25.216422 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:47:25.216436 kernel: GPT:17805311 != 20971519
Jan 17 12:47:25.216448 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:47:25.216459 kernel: GPT:17805311 != 20971519
Jan 17 12:47:25.216470 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:47:25.216481 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:47:25.186436 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:47:25.186586 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:47:25.187471 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:47:25.191679 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:47:25.191872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:47:25.192521 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:47:25.199392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:47:25.229186 kernel: libata version 3.00 loaded.
Jan 17 12:47:25.231218 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 12:47:25.240058 kernel: scsi host0: ata_piix
Jan 17 12:47:25.240212 kernel: scsi host1: ata_piix
Jan 17 12:47:25.240358 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 17 12:47:25.240378 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 17 12:47:25.253200 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453)
Jan 17 12:47:25.257842 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (449)
Jan 17 12:47:25.279520 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:47:25.310922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:47:25.317688 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:47:25.323666 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:47:25.324276 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:47:25.330846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:47:25.337342 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:47:25.340320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:47:25.348801 disk-uuid[504]: Primary Header is updated.
Jan 17 12:47:25.348801 disk-uuid[504]: Secondary Entries is updated.
Jan 17 12:47:25.348801 disk-uuid[504]: Secondary Header is updated.
Jan 17 12:47:25.358185 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:47:25.363205 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:47:25.366799 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:47:25.374177 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:47:26.379245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:47:26.381212 disk-uuid[505]: The operation has completed successfully.
Jan 17 12:47:26.453908 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:47:26.455804 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:47:26.507295 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:47:26.512765 sh[527]: Success
Jan 17 12:47:26.543183 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 17 12:47:26.625687 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:47:26.635415 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:47:26.643363 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:47:26.676392 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:47:26.676479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:47:26.678426 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:47:26.680507 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:47:26.683261 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:47:26.696494 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:47:26.697675 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:47:26.713354 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:47:26.718609 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:47:26.731986 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:47:26.732034 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:47:26.732047 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:47:26.738176 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:47:26.754784 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:47:26.754426 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:47:26.769418 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:47:26.776748 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:47:26.862489 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:47:26.870832 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:47:26.892951 systemd-networkd[710]: lo: Link UP
Jan 17 12:47:26.892960 systemd-networkd[710]: lo: Gained carrier
Jan 17 12:47:26.894976 systemd-networkd[710]: Enumeration completed
Jan 17 12:47:26.895353 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:47:26.895929 systemd[1]: Reached target network.target - Network.
Jan 17 12:47:26.897535 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:47:26.897538 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:47:26.900258 systemd-networkd[710]: eth0: Link UP
Jan 17 12:47:26.900262 systemd-networkd[710]: eth0: Gained carrier
Jan 17 12:47:26.900268 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:47:26.917238 systemd-networkd[710]: eth0: DHCPv4 address 172.24.4.220/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 17 12:47:26.940009 ignition[614]: Ignition 2.19.0
Jan 17 12:47:26.940855 ignition[614]: Stage: fetch-offline
Jan 17 12:47:26.940901 ignition[614]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:47:26.940911 ignition[614]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:47:26.942625 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:47:26.941022 ignition[614]: parsed url from cmdline: ""
Jan 17 12:47:26.941025 ignition[614]: no config URL provided
Jan 17 12:47:26.941031 ignition[614]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:47:26.941040 ignition[614]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:47:26.941045 ignition[614]: failed to fetch config: resource requires networking
Jan 17 12:47:26.941389 ignition[614]: Ignition finished successfully
Jan 17 12:47:26.950384 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:47:26.963733 ignition[719]: Ignition 2.19.0
Jan 17 12:47:26.963745 ignition[719]: Stage: fetch
Jan 17 12:47:26.963975 ignition[719]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:47:26.963990 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:47:26.964120 ignition[719]: parsed url from cmdline: ""
Jan 17 12:47:26.964126 ignition[719]: no config URL provided
Jan 17 12:47:26.964132 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:47:26.964143 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:47:26.964309 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 17 12:47:26.964346 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 17 12:47:26.965671 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 17 12:47:27.224339 ignition[719]: GET result: OK
Jan 17 12:47:27.224470 ignition[719]: parsing config with SHA512: 9cf350bbc601bc7a1db8fdc9e0d9e34af0e06df40e38a566996ec5975b7f4f41744497826628c36d86d736c03e8a7f4dd63c991d17c65667c16ec4b404175a7d
Jan 17 12:47:27.230949 unknown[719]: fetched base config from "system"
Jan 17 12:47:27.230975 unknown[719]: fetched base config from "system"
Jan 17 12:47:27.231586 ignition[719]: fetch: fetch complete
Jan 17 12:47:27.230991 unknown[719]: fetched user config from "openstack"
Jan 17 12:47:27.231598 ignition[719]: fetch: fetch passed
Jan 17 12:47:27.235015 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:47:27.231689 ignition[719]: Ignition finished successfully
Jan 17 12:47:27.244527 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:47:27.289275 ignition[725]: Ignition 2.19.0
Jan 17 12:47:27.289305 ignition[725]: Stage: kargs
Jan 17 12:47:27.289806 ignition[725]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:47:27.289833 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:47:27.292096 ignition[725]: kargs: kargs passed
Jan 17 12:47:27.294463 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:47:27.292300 ignition[725]: Ignition finished successfully
Jan 17 12:47:27.305495 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:47:27.338732 ignition[731]: Ignition 2.19.0
Jan 17 12:47:27.340485 ignition[731]: Stage: disks
Jan 17 12:47:27.340898 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:47:27.340943 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:47:27.347201 ignition[731]: disks: disks passed
Jan 17 12:47:27.348519 ignition[731]: Ignition finished successfully
Jan 17 12:47:27.350550 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:47:27.352950 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:47:27.354966 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:47:27.358070 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:47:27.361123 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:47:27.363759 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:47:27.373425 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:47:27.413397 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 12:47:27.426679 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:47:27.436366 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:47:27.598400 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:47:27.598628 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:47:27.599710 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:47:27.607371 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:47:27.610707 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:47:27.613464 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:47:27.617144 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 17 12:47:27.634918 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (747)
Jan 17 12:47:27.634969 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:47:27.634999 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:47:27.635027 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:47:27.635055 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:47:27.621959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:47:27.621995 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:47:27.640725 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:47:27.645259 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:47:27.653506 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:47:28.031106 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:47:28.075499 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:47:28.098385 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:47:28.113387 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:47:28.290896 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:47:28.298320 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:47:28.306573 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:47:28.324099 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:47:28.330419 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:47:28.370294 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:47:28.376229 ignition[863]: INFO : Ignition 2.19.0
Jan 17 12:47:28.376229 ignition[863]: INFO : Stage: mount
Jan 17 12:47:28.377442 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:47:28.377442 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:47:28.377442 ignition[863]: INFO : mount: mount passed
Jan 17 12:47:28.377442 ignition[863]: INFO : Ignition finished successfully
Jan 17 12:47:28.378249 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:47:28.445368 systemd-networkd[710]: eth0: Gained IPv6LL
Jan 17 12:47:35.212777 coreos-metadata[749]: Jan 17 12:47:35.212 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 12:47:35.253424 coreos-metadata[749]: Jan 17 12:47:35.253 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 12:47:35.268956 coreos-metadata[749]: Jan 17 12:47:35.268 INFO Fetch successful
Jan 17 12:47:35.270474 coreos-metadata[749]: Jan 17 12:47:35.269 INFO wrote hostname ci-4081-3-0-0-a01a08aaf3.novalocal to /sysroot/etc/hostname
Jan 17 12:47:35.272697 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 17 12:47:35.272961 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 17 12:47:35.284420 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:47:35.319913 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:47:35.464286 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (880)
Jan 17 12:47:35.520768 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:47:35.520862 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:47:35.525081 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:47:35.723249 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:47:35.752654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:47:35.805798 ignition[898]: INFO : Ignition 2.19.0
Jan 17 12:47:35.805798 ignition[898]: INFO : Stage: files
Jan 17 12:47:35.808836 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:47:35.808836 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:47:35.808836 ignition[898]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:47:35.814476 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:47:35.814476 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:47:35.818896 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:47:35.818896 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:47:35.818896 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:47:35.818386 unknown[898]: wrote ssh authorized keys file for user: core
Jan 17 12:47:35.826491 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:47:35.826491 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:47:35.826491 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:47:35.826491 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:47:35.826491 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:47:35.826491 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:47:35.826491 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:47:35.826491 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 17 12:47:36.253814 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 17 12:47:37.843215 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:47:37.845466 ignition[898]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:47:37.845466 ignition[898]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:47:37.845466 ignition[898]: INFO : files: files passed
Jan 17 12:47:37.845466 ignition[898]: INFO : Ignition finished successfully
Jan 17 12:47:37.845876 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:47:37.860366 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:47:37.863699 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:47:37.871419 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:47:37.871676 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:47:37.882045 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:47:37.883120 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:47:37.884943 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:47:37.888142 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:47:37.891285 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:47:37.898445 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:47:37.924655 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:47:37.924900 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:47:37.927112 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:47:37.928863 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:47:37.930885 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:47:37.935398 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:47:37.952693 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:47:37.961427 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:47:37.980591 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:47:37.982824 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:47:37.985100 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:47:37.986989 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:47:37.987332 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:47:37.989804 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:47:37.992001 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:47:37.993949 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:47:37.995783 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:47:38.000814 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:47:38.001522 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:47:38.002658 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:47:38.003860 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:47:38.005047 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:47:38.006192 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:47:38.007102 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:47:38.007276 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:47:38.008477 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:47:38.009318 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:47:38.010444 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:47:38.010548 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:47:38.011695 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:47:38.011879 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:47:38.013254 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:47:38.013415 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:47:38.014914 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:47:38.015074 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:47:38.027600 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:47:38.030363 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:47:38.030892 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:47:38.031060 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:47:38.033354 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:47:38.033511 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:47:38.045563 ignition[951]: INFO : Ignition 2.19.0
Jan 17 12:47:38.047179 ignition[951]: INFO : Stage: umount
Jan 17 12:47:38.047179 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:47:38.047179 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:47:38.050813 ignition[951]: INFO : umount: umount passed
Jan 17 12:47:38.050813 ignition[951]: INFO : Ignition finished successfully
Jan 17 12:47:38.047203 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:47:38.047301 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:47:38.052106 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:47:38.052225 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:47:38.053437 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:47:38.053502 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:47:38.054090 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:47:38.054128 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:47:38.054903 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 12:47:38.054942 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 12:47:38.055965 systemd[1]: Stopped target network.target - Network.
Jan 17 12:47:38.057008 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:47:38.057054 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:47:38.058431 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:47:38.061951 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:47:38.065272 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:47:38.065807 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:47:38.066347 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:47:38.066852 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:47:38.066894 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:47:38.068060 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:47:38.068098 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:47:38.068676 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:47:38.068723 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:47:38.069298 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:47:38.069343 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:47:38.070618 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:47:38.071839 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:47:38.073714 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:47:38.074208 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:47:38.074292 systemd-networkd[710]: eth0: DHCPv6 lease lost
Jan 17 12:47:38.074420 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:47:38.076400 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:47:38.076468 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:47:38.078280 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:47:38.078408 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:47:38.081248 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:47:38.081366 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:47:38.082670 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:47:38.082948 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:47:38.090297 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:47:38.092423 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:47:38.092484 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:47:38.093739 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:47:38.093781 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:47:38.094803 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:47:38.094846 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:47:38.096104 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:47:38.096146 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:47:38.097338 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:47:38.106506 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:47:38.106623 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:47:38.108460 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:47:38.108594 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:47:38.110026 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:47:38.110079 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:47:38.110825 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:47:38.110857 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:47:38.112021 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:47:38.112063 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:47:38.113682 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:47:38.113724 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:47:38.118770 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:47:38.118810 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:47:38.128331 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:47:38.130482 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:47:38.130532 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:47:38.131744 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 12:47:38.131784 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:47:38.133516 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:47:38.133558 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:47:38.134083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:47:38.134120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:47:38.137460 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:47:38.137563 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:47:38.138589 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:47:38.144317 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:47:38.150667 systemd[1]: Switching root.
Jan 17 12:47:38.179274 systemd-journald[184]: Journal stopped
Jan 17 12:47:39.988302 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:47:39.988359 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:47:39.988377 kernel: SELinux: policy capability open_perms=1
Jan 17 12:47:39.988389 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:47:39.988400 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:47:39.988411 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:47:39.988422 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:47:39.988435 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:47:39.988446 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:47:39.988458 kernel: audit: type=1403 audit(1737118058.859:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:47:39.988470 systemd[1]: Successfully loaded SELinux policy in 76.791ms.
Jan 17 12:47:39.988493 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.085ms.
Jan 17 12:47:39.988507 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:47:39.988521 systemd[1]: Detected virtualization kvm.
Jan 17 12:47:39.988536 systemd[1]: Detected architecture x86-64.
Jan 17 12:47:39.988549 systemd[1]: Detected first boot.
Jan 17 12:47:39.988562 systemd[1]: Hostname set to .
Jan 17 12:47:39.988577 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:47:39.988590 zram_generator::config[993]: No configuration found.
Jan 17 12:47:39.988608 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:47:39.990220 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 12:47:39.990239 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 12:47:39.990253 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:47:39.990267 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:47:39.990280 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:47:39.990294 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:47:39.990307 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:47:39.990320 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:47:39.990337 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:47:39.990350 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:47:39.990363 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:47:39.990376 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:47:39.990389 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:47:39.990402 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:47:39.990415 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:47:39.990428 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:47:39.990459 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:47:39.990474 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:47:39.990487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:47:39.990499 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 12:47:39.990513 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 12:47:39.990526 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:47:39.990543 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:47:39.990559 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:47:39.990572 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:47:39.990585 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:47:39.990598 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:47:39.990610 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:47:39.990623 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:47:39.990636 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:47:39.990649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:47:39.990666 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:47:39.990681 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:47:39.990694 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:47:39.990707 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:47:39.990720 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:47:39.990733 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:47:39.990745 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:47:39.990758 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:47:39.990771 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:47:39.990785 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:47:39.990801 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:47:39.990815 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:47:39.990829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:47:39.990841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:47:39.990853 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:47:39.990865 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:47:39.990878 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:47:39.990890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:47:39.990904 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:47:39.990917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:47:39.990929 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:47:39.990941 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 12:47:39.990953 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 12:47:39.990965 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 12:47:39.990977 kernel: loop: module loaded
Jan 17 12:47:39.990989 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 12:47:39.991002 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:47:39.991014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:47:39.991026 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:47:39.991038 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:47:39.991067 systemd-journald[1089]: Collecting audit messages is disabled.
Jan 17 12:47:39.991092 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:47:39.991105 systemd-journald[1089]: Journal started
Jan 17 12:47:39.991132 systemd-journald[1089]: Runtime Journal (/run/log/journal/8c0e300b95d04153b47cb897fdbacb3b) is 8.0M, max 78.3M, 70.3M free.
Jan 17 12:47:39.615268 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:47:39.631210 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 12:47:39.631555 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 12:47:39.995182 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 12:47:39.997388 systemd[1]: Stopped verity-setup.service.
Jan 17 12:47:39.997419 kernel: fuse: init (API version 7.39)
Jan 17 12:47:40.003186 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:47:40.008253 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:47:40.009754 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:47:40.011362 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:47:40.012198 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:47:40.031672 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:47:40.032176 kernel: ACPI: bus type drm_connector registered
Jan 17 12:47:40.032527 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:47:40.033204 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:47:40.033956 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:47:40.034785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:47:40.035658 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:47:40.035795 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:47:40.036603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:47:40.036716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:47:40.037495 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:47:40.037606 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:47:40.038353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:47:40.038463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:47:40.039444 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:47:40.039587 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:47:40.040417 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:47:40.040542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:47:40.041516 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:47:40.042274 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:47:40.043030 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:47:40.052980 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:47:40.061284 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:47:40.067291 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:47:40.069228 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:47:40.069265 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:47:40.071501 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:47:40.075254 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:47:40.082322 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:47:40.083348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:47:40.086174 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:47:40.089262 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:47:40.091233 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:47:40.096294 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:47:40.098447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:47:40.100432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:47:40.102681 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:47:40.110415 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:47:40.112569 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:47:40.113215 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:47:40.113979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:47:40.117845 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:47:40.128508 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:47:40.142996 systemd-journald[1089]: Time spent on flushing to /var/log/journal/8c0e300b95d04153b47cb897fdbacb3b is 49.170ms for 933 entries.
Jan 17 12:47:40.142996 systemd-journald[1089]: System Journal (/var/log/journal/8c0e300b95d04153b47cb897fdbacb3b) is 8.0M, max 584.8M, 576.8M free.
Jan 17 12:47:40.231084 systemd-journald[1089]: Received client request to flush runtime journal.
Jan 17 12:47:40.231135 kernel: loop0: detected capacity change from 0 to 205544
Jan 17 12:47:40.152885 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:47:40.153582 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:47:40.157361 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:47:40.163765 udevadm[1132]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 12:47:40.179195 systemd-tmpfiles[1127]: ACLs are not supported, ignoring.
Jan 17 12:47:40.179210 systemd-tmpfiles[1127]: ACLs are not supported, ignoring.
Jan 17 12:47:40.189760 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:47:40.197338 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:47:40.203218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:47:40.233756 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 12:47:40.245175 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:47:40.249989 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:47:40.250529 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:47:40.270532 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:47:40.276216 kernel: loop1: detected capacity change from 0 to 8
Jan 17 12:47:40.280684 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:47:40.299468 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jan 17 12:47:40.299925 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jan 17 12:47:40.301348 kernel: loop2: detected capacity change from 0 to 140768
Jan 17 12:47:40.307043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:47:40.350280 kernel: loop3: detected capacity change from 0 to 142488
Jan 17 12:47:40.421188 kernel: loop4: detected capacity change from 0 to 205544
Jan 17 12:47:40.482186 kernel: loop5: detected capacity change from 0 to 8
Jan 17 12:47:40.486164 kernel: loop6: detected capacity change from 0 to 140768
Jan 17 12:47:40.525186 kernel: loop7: detected capacity change from 0 to 142488
Jan 17 12:47:40.583102 (sd-merge)[1157]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 17 12:47:40.584397 (sd-merge)[1157]: Merged extensions into '/usr'.
Jan 17 12:47:40.596419 systemd[1]: Reloading requested from client PID 1126 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:47:40.596447 systemd[1]: Reloading...
Jan 17 12:47:40.669350 zram_generator::config[1180]: No configuration found.
Jan 17 12:47:40.906793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:47:40.966383 systemd[1]: Reloading finished in 369 ms.
Jan 17 12:47:40.987746 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:47:40.998341 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:47:41.021364 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:47:41.024748 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 12:47:41.036565 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:47:41.039444 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:47:41.039453 systemd[1]: Reloading...
Jan 17 12:47:41.049061 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:47:41.049942 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:47:41.052222 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:47:41.052656 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 17 12:47:41.052717 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 17 12:47:41.058665 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:47:41.058674 systemd-tmpfiles[1239]: Skipping /boot
Jan 17 12:47:41.065386 ldconfig[1121]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 12:47:41.072776 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:47:41.073088 systemd-tmpfiles[1239]: Skipping /boot
Jan 17 12:47:41.099015 systemd-udevd[1242]: Using default interface naming scheme 'v255'.
Jan 17 12:47:41.124220 zram_generator::config[1268]: No configuration found.
Jan 17 12:47:41.238709 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 12:47:41.255198 kernel: ACPI: button: Power Button [PWRF]
Jan 17 12:47:41.261214 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1293)
Jan 17 12:47:41.325764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:47:41.343192 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 17 12:47:41.363663 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 12:47:41.378190 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 12:47:41.407254 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 17 12:47:41.407326 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 17 12:47:41.411587 kernel: Console: switching to colour dummy device 80x25
Jan 17 12:47:41.413394 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 17 12:47:41.413431 kernel: [drm] features: -context_init
Jan 17 12:47:41.415353 kernel: [drm] number of scanouts: 1
Jan 17 12:47:41.415391 kernel: [drm] number of cap sets: 0
Jan 17 12:47:41.416951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:47:41.417131 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 12:47:41.417772 systemd[1]: Reloading finished in 378 ms.
Jan 17 12:47:41.419180 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 17 12:47:41.430228 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 17 12:47:41.430318 kernel: Console: switching to colour frame buffer device 160x50
Jan 17 12:47:41.431353 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:47:41.433899 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 17 12:47:41.437140 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 12:47:41.444209 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:47:41.470296 systemd[1]: Finished ensure-sysext.service.
Jan 17 12:47:41.474807 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 12:47:41.489729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:47:41.494400 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:47:41.498323 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:47:41.500541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:47:41.502793 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 12:47:41.506996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:47:41.509538 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:47:41.512299 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:47:41.515879 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:47:41.516051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:47:41.517869 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 12:47:41.524147 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:47:41.530975 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:47:41.533353 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:47:41.543327 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 12:47:41.547873 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:47:41.551700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:47:41.551781 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:47:41.553228 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:47:41.553375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:47:41.554414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:47:41.555200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:47:41.555489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:47:41.555598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:47:41.558132 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:47:41.567381 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 12:47:41.568451 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:47:41.568595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:47:41.575206 lvm[1362]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:47:41.575684 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:47:41.580445 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 12:47:41.606617 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 12:47:41.611866 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:47:41.619443 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 12:47:41.622366 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:47:41.630861 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:47:41.638595 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:47:41.649206 lvm[1394]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:47:41.662292 augenrules[1403]: No rules Jan 17 12:47:41.662491 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:47:41.670171 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:47:41.674419 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:47:41.700350 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:47:41.732221 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:47:41.736595 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:47:41.759064 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:47:41.769057 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:47:41.771955 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:47:41.787459 systemd-networkd[1375]: lo: Link UP Jan 17 12:47:41.787712 systemd-networkd[1375]: lo: Gained carrier Jan 17 12:47:41.788957 systemd-networkd[1375]: Enumeration completed Jan 17 12:47:41.789432 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:47:41.789498 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:47:41.790238 systemd-networkd[1375]: eth0: Link UP Jan 17 12:47:41.790294 systemd-networkd[1375]: eth0: Gained carrier Jan 17 12:47:41.790355 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:47:41.790417 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:47:41.798573 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:47:41.808676 systemd-resolved[1376]: Positive Trust Anchors: Jan 17 12:47:41.808692 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:47:41.808734 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:47:41.812296 systemd-networkd[1375]: eth0: DHCPv4 address 172.24.4.220/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 17 12:47:41.813082 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Jan 17 12:47:41.814320 systemd-resolved[1376]: Using system hostname 'ci-4081-3-0-0-a01a08aaf3.novalocal'. Jan 17 12:47:41.815974 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:47:41.816991 systemd[1]: Reached target network.target - Network. 
Jan 17 12:47:41.817583 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:47:41.818122 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:47:41.820726 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:47:41.822125 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:47:41.823458 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:47:41.825499 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:47:41.827116 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:47:41.828021 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:47:41.828119 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:47:41.829006 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:47:41.830768 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:47:41.833258 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:47:41.847696 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:47:41.851445 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:47:41.852910 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:47:41.854326 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:47:41.855300 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:47:41.855398 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:47:41.862556 systemd[1]: Starting containerd.service - containerd container runtime... 
Jan 17 12:47:41.867222 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:47:41.874368 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:47:41.885352 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:47:41.891107 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:47:41.896613 jq[1432]: false Jan 17 12:47:41.897409 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:47:41.901566 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:47:41.907479 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:47:41.918703 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:47:41.929336 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:47:41.933104 dbus-daemon[1429]: [system] SELinux support is enabled Jan 17 12:47:41.932459 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:47:41.932956 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 17 12:47:41.940579 extend-filesystems[1433]: Found loop4 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found loop5 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found loop6 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found loop7 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found vda Jan 17 12:47:41.940579 extend-filesystems[1433]: Found vda1 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found vda2 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found vda3 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found usr Jan 17 12:47:41.940579 extend-filesystems[1433]: Found vda4 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found vda6 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found vda7 Jan 17 12:47:41.940579 extend-filesystems[1433]: Found vda9 Jan 17 12:47:41.940579 extend-filesystems[1433]: Checking size of /dev/vda9 Jan 17 12:47:41.941557 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:47:41.947287 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:47:42.015648 extend-filesystems[1433]: Resized partition /dev/vda9 Jan 17 12:47:42.017678 update_engine[1443]: I20250117 12:47:41.959353 1443 main.cc:92] Flatcar Update Engine starting Jan 17 12:47:42.017678 update_engine[1443]: I20250117 12:47:41.960639 1443 update_check_scheduler.cc:74] Next update check in 8m22s Jan 17 12:47:41.959948 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:47:42.017992 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:47:41.968513 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:47:42.034745 jq[1449]: true Jan 17 12:47:41.968672 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:47:41.968923 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 17 12:47:42.041961 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 17 12:47:41.969048 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:47:41.978329 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:47:41.980905 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:47:42.003398 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:47:42.006068 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:47:42.006091 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:47:42.009664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:47:42.009687 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:47:42.010378 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:47:42.029319 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:47:42.048449 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:47:42.064278 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 17 12:47:42.126095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1309) Jan 17 12:47:42.095723 systemd-logind[1441]: New seat seat0. 
Jan 17 12:47:42.126416 jq[1462]: true Jan 17 12:47:42.138193 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:47:42.138193 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:47:42.138193 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 17 12:47:42.157880 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Jan 17 12:47:42.138243 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:47:42.138270 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:47:42.139175 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:47:42.141672 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:47:42.144461 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:47:42.180100 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:47:42.191323 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:47:42.210280 bash[1483]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:47:42.206618 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:47:42.223446 systemd[1]: Starting sshkeys.service... Jan 17 12:47:42.230315 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:47:42.241876 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:47:42.249709 systemd[1]: Started sshd@0-172.24.4.220:22-172.24.4.1:56746.service - OpenSSH per-connection server daemon (172.24.4.1:56746). Jan 17 12:47:42.259220 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:47:42.268520 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 17 12:47:42.271484 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:47:42.274047 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:47:42.284530 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:47:42.298520 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:47:42.311608 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:47:42.324577 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:47:42.325415 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:47:42.562305 containerd[1463]: time="2025-01-17T12:47:42.561661902Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:47:42.627912 containerd[1463]: time="2025-01-17T12:47:42.627788071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:47:42.631234 containerd[1463]: time="2025-01-17T12:47:42.631091758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:47:42.631234 containerd[1463]: time="2025-01-17T12:47:42.631221090Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:47:42.631399 containerd[1463]: time="2025-01-17T12:47:42.631269811Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:47:42.631686 containerd[1463]: time="2025-01-17T12:47:42.631616582Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 17 12:47:42.631772 containerd[1463]: time="2025-01-17T12:47:42.631679700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:47:42.631912 containerd[1463]: time="2025-01-17T12:47:42.631860069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:47:42.631988 containerd[1463]: time="2025-01-17T12:47:42.631912537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:47:42.632392 containerd[1463]: time="2025-01-17T12:47:42.632312377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:47:42.632392 containerd[1463]: time="2025-01-17T12:47:42.632372299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:47:42.632535 containerd[1463]: time="2025-01-17T12:47:42.632411162Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:47:42.632535 containerd[1463]: time="2025-01-17T12:47:42.632439646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:47:42.632655 containerd[1463]: time="2025-01-17T12:47:42.632624723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:47:42.633185 containerd[1463]: time="2025-01-17T12:47:42.633081830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:47:42.633482 containerd[1463]: time="2025-01-17T12:47:42.633387493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:47:42.633482 containerd[1463]: time="2025-01-17T12:47:42.633462133Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:47:42.633732 containerd[1463]: time="2025-01-17T12:47:42.633660746Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:47:42.633844 containerd[1463]: time="2025-01-17T12:47:42.633802261Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:47:42.740975 containerd[1463]: time="2025-01-17T12:47:42.740884714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:47:42.741426 containerd[1463]: time="2025-01-17T12:47:42.741039244Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:47:42.741426 containerd[1463]: time="2025-01-17T12:47:42.741103885Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:47:42.741426 containerd[1463]: time="2025-01-17T12:47:42.741203131Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:47:42.741426 containerd[1463]: time="2025-01-17T12:47:42.741276519Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:47:42.741694 containerd[1463]: time="2025-01-17T12:47:42.741657153Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 17 12:47:42.742693 containerd[1463]: time="2025-01-17T12:47:42.742592437Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:47:42.743030 containerd[1463]: time="2025-01-17T12:47:42.742981637Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:47:42.743133 containerd[1463]: time="2025-01-17T12:47:42.743040317Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:47:42.743133 containerd[1463]: time="2025-01-17T12:47:42.743077437Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:47:42.743133 containerd[1463]: time="2025-01-17T12:47:42.743114717Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:47:42.743347 containerd[1463]: time="2025-01-17T12:47:42.743149372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:47:42.743347 containerd[1463]: time="2025-01-17T12:47:42.743224302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:47:42.743347 containerd[1463]: time="2025-01-17T12:47:42.743262794Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:47:42.743347 containerd[1463]: time="2025-01-17T12:47:42.743298762Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:47:42.743347 containerd[1463]: time="2025-01-17T12:47:42.743332435Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 17 12:47:42.743597 containerd[1463]: time="2025-01-17T12:47:42.743363924Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:47:42.743597 containerd[1463]: time="2025-01-17T12:47:42.743395533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:47:42.743597 containerd[1463]: time="2025-01-17T12:47:42.743440728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743597 containerd[1463]: time="2025-01-17T12:47:42.743474481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743597 containerd[1463]: time="2025-01-17T12:47:42.743505930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743597 containerd[1463]: time="2025-01-17T12:47:42.743538161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743597 containerd[1463]: time="2025-01-17T12:47:42.743567706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743985 containerd[1463]: time="2025-01-17T12:47:42.743599376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743985 containerd[1463]: time="2025-01-17T12:47:42.743632347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743985 containerd[1463]: time="2025-01-17T12:47:42.743664538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743985 containerd[1463]: time="2025-01-17T12:47:42.743699103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 17 12:47:42.743985 containerd[1463]: time="2025-01-17T12:47:42.743752743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743985 containerd[1463]: time="2025-01-17T12:47:42.743783541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743985 containerd[1463]: time="2025-01-17T12:47:42.743859794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743985 containerd[1463]: time="2025-01-17T12:47:42.743902825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.743985 containerd[1463]: time="2025-01-17T12:47:42.743954912Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744013763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744045773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744073725Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744210351Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744258422Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744286635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744320087Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744347128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744379258Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744413042Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:47:42.744884 containerd[1463]: time="2025-01-17T12:47:42.744439682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:47:42.746102 containerd[1463]: time="2025-01-17T12:47:42.745079181Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:47:42.746102 containerd[1463]: time="2025-01-17T12:47:42.745285899Z" level=info msg="Connect containerd service" Jan 17 12:47:42.746102 containerd[1463]: time="2025-01-17T12:47:42.745361370Z" level=info msg="using legacy CRI server" Jan 17 12:47:42.746102 containerd[1463]: time="2025-01-17T12:47:42.745378873Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:47:42.746102 containerd[1463]: time="2025-01-17T12:47:42.745562287Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:47:42.747325 containerd[1463]: time="2025-01-17T12:47:42.746905276Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:47:42.748336 containerd[1463]: time="2025-01-17T12:47:42.747236357Z" level=info msg="Start subscribing containerd event" Jan 17 12:47:42.748336 containerd[1463]: time="2025-01-17T12:47:42.748056956Z" level=info msg="Start recovering state" Jan 17 12:47:42.748336 containerd[1463]: time="2025-01-17T12:47:42.748219871Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 17 12:47:42.748530 containerd[1463]: time="2025-01-17T12:47:42.748383077Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:47:42.748741 containerd[1463]: time="2025-01-17T12:47:42.748643325Z" level=info msg="Start event monitor" Jan 17 12:47:42.749482 containerd[1463]: time="2025-01-17T12:47:42.749400836Z" level=info msg="Start snapshots syncer" Jan 17 12:47:42.749699 containerd[1463]: time="2025-01-17T12:47:42.749450329Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:47:42.749917 containerd[1463]: time="2025-01-17T12:47:42.749649262Z" level=info msg="Start streaming server" Jan 17 12:47:42.750384 containerd[1463]: time="2025-01-17T12:47:42.750149490Z" level=info msg="containerd successfully booted in 0.190661s" Jan 17 12:47:42.750309 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:47:43.197412 sshd[1500]: Accepted publickey for core from 172.24.4.1 port 56746 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:47:43.201087 sshd[1500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:47:43.226028 systemd-logind[1441]: New session 1 of user core. Jan 17 12:47:43.231257 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:47:43.257993 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:47:43.284296 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:47:43.297398 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:47:43.319299 (systemd)[1523]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:47:43.358440 systemd-networkd[1375]: eth0: Gained IPv6LL Jan 17 12:47:43.361020 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. 
Jan 17 12:47:43.363988 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:47:43.369796 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:47:43.383866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:47:43.402813 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:47:43.447636 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:47:43.493988 systemd[1523]: Queued start job for default target default.target. Jan 17 12:47:43.500131 systemd[1523]: Created slice app.slice - User Application Slice. Jan 17 12:47:43.500481 systemd[1523]: Reached target paths.target - Paths. Jan 17 12:47:43.500497 systemd[1523]: Reached target timers.target - Timers. Jan 17 12:47:43.502262 systemd[1523]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:47:43.512909 systemd[1523]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:47:43.512960 systemd[1523]: Reached target sockets.target - Sockets. Jan 17 12:47:43.512975 systemd[1523]: Reached target basic.target - Basic System. Jan 17 12:47:43.513009 systemd[1523]: Reached target default.target - Main User Target. Jan 17 12:47:43.513034 systemd[1523]: Startup finished in 178ms. Jan 17 12:47:43.513523 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:47:43.521426 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:47:44.021706 systemd[1]: Started sshd@1-172.24.4.220:22-172.24.4.1:37414.service - OpenSSH per-connection server daemon (172.24.4.1:37414). Jan 17 12:47:45.093454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:47:45.093918 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:47:45.263105 sshd[1546]: Accepted publickey for core from 172.24.4.1 port 37414 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:47:45.266756 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:47:45.276973 systemd-logind[1441]: New session 2 of user core.
Jan 17 12:47:45.285708 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 12:47:46.010510 sshd[1546]: pam_unix(sshd:session): session closed for user core
Jan 17 12:47:46.025677 systemd[1]: sshd@1-172.24.4.220:22-172.24.4.1:37414.service: Deactivated successfully.
Jan 17 12:47:46.030633 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 12:47:46.035379 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit.
Jan 17 12:47:46.046368 systemd[1]: Started sshd@2-172.24.4.220:22-172.24.4.1:37424.service - OpenSSH per-connection server daemon (172.24.4.1:37424).
Jan 17 12:47:46.055779 systemd-logind[1441]: Removed session 2.
Jan 17 12:47:46.180213 kubelet[1554]: E0117 12:47:46.180118 1554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:47:46.184032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:47:46.184864 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:47:46.185488 systemd[1]: kubelet.service: Consumed 1.761s CPU time.
Jan 17 12:47:47.250022 sshd[1566]: Accepted publickey for core from 172.24.4.1 port 37424 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:47:47.252865 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:47:47.263901 systemd-logind[1441]: New session 3 of user core.
Jan 17 12:47:47.282601 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 12:47:47.359760 login[1509]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 12:47:47.371665 systemd-logind[1441]: New session 4 of user core.
Jan 17 12:47:47.378816 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 12:47:47.381956 login[1511]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 12:47:47.393256 systemd-logind[1441]: New session 5 of user core.
Jan 17 12:47:47.406540 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 12:47:47.893302 sshd[1566]: pam_unix(sshd:session): session closed for user core
Jan 17 12:47:47.900207 systemd[1]: sshd@2-172.24.4.220:22-172.24.4.1:37424.service: Deactivated successfully.
Jan 17 12:47:47.903696 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 12:47:47.905371 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit.
Jan 17 12:47:47.907926 systemd-logind[1441]: Removed session 3.
Jan 17 12:47:48.962829 coreos-metadata[1428]: Jan 17 12:47:48.962 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 12:47:49.041920 coreos-metadata[1428]: Jan 17 12:47:49.041 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 17 12:47:49.234126 coreos-metadata[1428]: Jan 17 12:47:49.233 INFO Fetch successful
Jan 17 12:47:49.234126 coreos-metadata[1428]: Jan 17 12:47:49.233 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 12:47:49.248743 coreos-metadata[1428]: Jan 17 12:47:49.248 INFO Fetch successful
Jan 17 12:47:49.248743 coreos-metadata[1428]: Jan 17 12:47:49.248 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 17 12:47:49.265418 coreos-metadata[1428]: Jan 17 12:47:49.265 INFO Fetch successful
Jan 17 12:47:49.265418 coreos-metadata[1428]: Jan 17 12:47:49.265 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 17 12:47:49.280376 coreos-metadata[1428]: Jan 17 12:47:49.280 INFO Fetch successful
Jan 17 12:47:49.280376 coreos-metadata[1428]: Jan 17 12:47:49.280 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 17 12:47:49.294633 coreos-metadata[1428]: Jan 17 12:47:49.294 INFO Fetch successful
Jan 17 12:47:49.294633 coreos-metadata[1428]: Jan 17 12:47:49.294 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 17 12:47:49.309386 coreos-metadata[1428]: Jan 17 12:47:49.309 INFO Fetch successful
Jan 17 12:47:49.353630 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 12:47:49.357074 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 12:47:49.374554 coreos-metadata[1503]: Jan 17 12:47:49.374 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 12:47:49.416122 coreos-metadata[1503]: Jan 17 12:47:49.416 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 17 12:47:49.431970 coreos-metadata[1503]: Jan 17 12:47:49.431 INFO Fetch successful
Jan 17 12:47:49.431970 coreos-metadata[1503]: Jan 17 12:47:49.431 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 17 12:47:49.443387 coreos-metadata[1503]: Jan 17 12:47:49.443 INFO Fetch successful
Jan 17 12:47:49.449468 unknown[1503]: wrote ssh authorized keys file for user: core
Jan 17 12:47:49.482319 update-ssh-keys[1609]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:47:49.484651 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 12:47:49.489935 systemd[1]: Finished sshkeys.service.
Jan 17 12:47:49.492314 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 12:47:49.492784 systemd[1]: Startup finished in 1.222s (kernel) + 15.035s (initrd) + 10.709s (userspace) = 26.968s.
Jan 17 12:47:56.434930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:47:56.442575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:47:56.716791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:47:56.720827 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:47:56.852764 kubelet[1621]: E0117 12:47:56.852689 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:47:56.859449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:47:56.859823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:47:57.917709 systemd[1]: Started sshd@3-172.24.4.220:22-172.24.4.1:55472.service - OpenSSH per-connection server daemon (172.24.4.1:55472).
Jan 17 12:47:59.089903 sshd[1630]: Accepted publickey for core from 172.24.4.1 port 55472 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:47:59.092674 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:47:59.102680 systemd-logind[1441]: New session 6 of user core.
Jan 17 12:47:59.111440 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 12:47:59.734499 sshd[1630]: pam_unix(sshd:session): session closed for user core
Jan 17 12:47:59.743561 systemd[1]: sshd@3-172.24.4.220:22-172.24.4.1:55472.service: Deactivated successfully.
Jan 17 12:47:59.746086 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 12:47:59.747604 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
Jan 17 12:47:59.753733 systemd[1]: Started sshd@4-172.24.4.220:22-172.24.4.1:55478.service - OpenSSH per-connection server daemon (172.24.4.1:55478).
Jan 17 12:47:59.757983 systemd-logind[1441]: Removed session 6.
Jan 17 12:48:00.940707 sshd[1637]: Accepted publickey for core from 172.24.4.1 port 55478 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:48:00.943403 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:48:00.953633 systemd-logind[1441]: New session 7 of user core.
Jan 17 12:48:00.966483 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 12:48:01.585546 sshd[1637]: pam_unix(sshd:session): session closed for user core
Jan 17 12:48:01.596361 systemd[1]: sshd@4-172.24.4.220:22-172.24.4.1:55478.service: Deactivated successfully.
Jan 17 12:48:01.599212 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 12:48:01.602530 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
Jan 17 12:48:01.607681 systemd[1]: Started sshd@5-172.24.4.220:22-172.24.4.1:55480.service - OpenSSH per-connection server daemon (172.24.4.1:55480).
Jan 17 12:48:01.611014 systemd-logind[1441]: Removed session 7.
Jan 17 12:48:02.794625 sshd[1644]: Accepted publickey for core from 172.24.4.1 port 55480 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:48:02.797313 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:48:02.806606 systemd-logind[1441]: New session 8 of user core.
Jan 17 12:48:02.818494 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 12:48:03.438831 sshd[1644]: pam_unix(sshd:session): session closed for user core
Jan 17 12:48:03.448858 systemd[1]: sshd@5-172.24.4.220:22-172.24.4.1:55480.service: Deactivated successfully.
Jan 17 12:48:03.451986 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 12:48:03.455659 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
Jan 17 12:48:03.464014 systemd[1]: Started sshd@6-172.24.4.220:22-172.24.4.1:38376.service - OpenSSH per-connection server daemon (172.24.4.1:38376).
Jan 17 12:48:03.467382 systemd-logind[1441]: Removed session 8.
Jan 17 12:48:04.605052 sshd[1651]: Accepted publickey for core from 172.24.4.1 port 38376 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:48:04.607705 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:48:04.617201 systemd-logind[1441]: New session 9 of user core.
Jan 17 12:48:04.626437 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 12:48:05.106112 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 12:48:05.106786 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:48:05.124326 sudo[1654]: pam_unix(sudo:session): session closed for user root
Jan 17 12:48:05.354387 sshd[1651]: pam_unix(sshd:session): session closed for user core
Jan 17 12:48:05.364550 systemd[1]: sshd@6-172.24.4.220:22-172.24.4.1:38376.service: Deactivated successfully.
Jan 17 12:48:05.367148 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 12:48:05.370532 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit.
Jan 17 12:48:05.382819 systemd[1]: Started sshd@7-172.24.4.220:22-172.24.4.1:38388.service - OpenSSH per-connection server daemon (172.24.4.1:38388).
Jan 17 12:48:05.386510 systemd-logind[1441]: Removed session 9.
Jan 17 12:48:06.517200 sshd[1659]: Accepted publickey for core from 172.24.4.1 port 38388 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:48:06.519989 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:48:06.529553 systemd-logind[1441]: New session 10 of user core.
Jan 17 12:48:06.541465 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 12:48:06.995479 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 12:48:06.996142 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:48:06.998722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 12:48:07.006611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:48:07.013525 sudo[1663]: pam_unix(sudo:session): session closed for user root
Jan 17 12:48:07.024960 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 12:48:07.025777 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:48:07.060520 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 12:48:07.066979 auditctl[1669]: No rules
Jan 17 12:48:07.068617 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 12:48:07.069047 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 12:48:07.076001 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:48:07.138553 augenrules[1687]: No rules
Jan 17 12:48:07.140610 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:48:07.144456 sudo[1662]: pam_unix(sudo:session): session closed for user root
Jan 17 12:48:07.318556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:48:07.318692 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:48:07.394010 kubelet[1697]: E0117 12:48:07.393951 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:48:07.397278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:48:07.397421 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:48:07.405714 sshd[1659]: pam_unix(sshd:session): session closed for user core
Jan 17 12:48:07.415596 systemd[1]: sshd@7-172.24.4.220:22-172.24.4.1:38388.service: Deactivated successfully.
Jan 17 12:48:07.417792 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:48:07.419779 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:48:07.430742 systemd[1]: Started sshd@8-172.24.4.220:22-172.24.4.1:38394.service - OpenSSH per-connection server daemon (172.24.4.1:38394).
Jan 17 12:48:07.433517 systemd-logind[1441]: Removed session 10.
Jan 17 12:48:08.600092 sshd[1708]: Accepted publickey for core from 172.24.4.1 port 38394 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:48:08.602723 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:48:08.612917 systemd-logind[1441]: New session 11 of user core.
Jan 17 12:48:08.618442 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:48:09.077708 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 12:48:09.078390 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:48:10.367354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:48:10.381629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:48:10.427308 systemd[1]: Reloading requested from client PID 1744 ('systemctl') (unit session-11.scope)...
Jan 17 12:48:10.427449 systemd[1]: Reloading...
Jan 17 12:48:10.516198 zram_generator::config[1782]: No configuration found.
Jan 17 12:48:10.655411 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:48:10.736012 systemd[1]: Reloading finished in 308 ms.
Jan 17 12:48:10.783234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:48:10.785376 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:48:10.788912 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 12:48:10.789173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:48:10.790650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:48:10.891485 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:48:10.891497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:48:11.133752 kubelet[1851]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:48:11.133752 kubelet[1851]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:48:11.133752 kubelet[1851]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:48:11.135287 kubelet[1851]: I0117 12:48:11.134519 1851 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:48:11.592660 kubelet[1851]: I0117 12:48:11.592498 1851 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 17 12:48:11.592660 kubelet[1851]: I0117 12:48:11.592526 1851 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:48:11.592868 kubelet[1851]: I0117 12:48:11.592752 1851 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 17 12:48:11.625948 kubelet[1851]: I0117 12:48:11.625540 1851 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:48:11.640608 kubelet[1851]: E0117 12:48:11.640492 1851 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 12:48:11.640608 kubelet[1851]: I0117 12:48:11.640543 1851 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 12:48:11.645094 kubelet[1851]: I0117 12:48:11.645072 1851 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:48:11.647277 kubelet[1851]: I0117 12:48:11.646985 1851 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 17 12:48:11.647277 kubelet[1851]: I0117 12:48:11.647116 1851 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:48:11.647538 kubelet[1851]: I0117 12:48:11.647144 1851 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.220","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 12:48:11.647538 kubelet[1851]: I0117 12:48:11.647362 1851 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:48:11.647538 kubelet[1851]: I0117 12:48:11.647373 1851 container_manager_linux.go:300] "Creating device plugin manager"
Jan 17 12:48:11.647538 kubelet[1851]: I0117 12:48:11.647471 1851 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:48:11.650638 kubelet[1851]: I0117 12:48:11.650558 1851 kubelet.go:408] "Attempting to sync node with API server"
Jan 17 12:48:11.650638 kubelet[1851]: I0117 12:48:11.650585 1851 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:48:11.650638 kubelet[1851]: I0117 12:48:11.650614 1851 kubelet.go:314] "Adding apiserver pod source"
Jan 17 12:48:11.650638 kubelet[1851]: I0117 12:48:11.650628 1851 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:48:11.653422 kubelet[1851]: E0117 12:48:11.653352 1851 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:48:11.653422 kubelet[1851]: E0117 12:48:11.653406 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:48:11.660833 kubelet[1851]: I0117 12:48:11.660779 1851 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:48:11.662930 kubelet[1851]: I0117 12:48:11.662886 1851 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:48:11.663046 kubelet[1851]: W0117 12:48:11.662945 1851 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 12:48:11.663536 kubelet[1851]: I0117 12:48:11.663488 1851 server.go:1269] "Started kubelet"
Jan 17 12:48:11.666468 kubelet[1851]: I0117 12:48:11.666340 1851 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:48:11.670174 kubelet[1851]: I0117 12:48:11.668732 1851 server.go:460] "Adding debug handlers to kubelet server"
Jan 17 12:48:11.670174 kubelet[1851]: I0117 12:48:11.669420 1851 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:48:11.670174 kubelet[1851]: I0117 12:48:11.669717 1851 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:48:11.672897 kubelet[1851]: I0117 12:48:11.672879 1851 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:48:11.675644 kubelet[1851]: I0117 12:48:11.675630 1851 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 17 12:48:11.675782 kubelet[1851]: I0117 12:48:11.673144 1851 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 12:48:11.675955 kubelet[1851]: I0117 12:48:11.675943 1851 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 17 12:48:11.676055 kubelet[1851]: I0117 12:48:11.676045 1851 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 12:48:11.676726 kubelet[1851]: E0117 12:48:11.676710 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found"
Jan 17 12:48:11.677120 kubelet[1851]: E0117 12:48:11.677093 1851 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:48:11.678833 kubelet[1851]: I0117 12:48:11.678822 1851 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:48:11.679034 kubelet[1851]: I0117 12:48:11.679017 1851 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:48:11.680831 kubelet[1851]: I0117 12:48:11.680817 1851 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:48:11.696046 kubelet[1851]: W0117 12:48:11.696003 1851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.220" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 17 12:48:11.696160 kubelet[1851]: E0117 12:48:11.696104 1851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.220\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 17 12:48:11.696897 kubelet[1851]: W0117 12:48:11.696868 1851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 17 12:48:11.696986 kubelet[1851]: E0117 12:48:11.696970 1851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 17 12:48:11.697245 kubelet[1851]: I0117 12:48:11.697232 1851 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:48:11.697352 kubelet[1851]: I0117 12:48:11.697340 1851 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:48:11.697434 kubelet[1851]: I0117 12:48:11.697425 1851 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:48:11.701199 kubelet[1851]: E0117 12:48:11.696676 1851 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.220.181b7bb3628169ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.220,UID:172.24.4.220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.220,},FirstTimestamp:2025-01-17 12:48:11.663469034 +0000 UTC m=+0.767951945,LastTimestamp:2025-01-17 12:48:11.663469034 +0000 UTC m=+0.767951945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.220,}"
Jan 17 12:48:11.701593 kubelet[1851]: E0117 12:48:11.700749 1851 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.220\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 17 12:48:11.701593 kubelet[1851]: W0117 12:48:11.700904 1851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 17 12:48:11.701674 kubelet[1851]: E0117 12:48:11.701643 1851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Jan 17 12:48:11.703519 kubelet[1851]: E0117 12:48:11.703258 1851 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.220.181b7bb3635125d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.220,UID:172.24.4.220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.24.4.220,},FirstTimestamp:2025-01-17 12:48:11.677083097 +0000 UTC m=+0.781566018,LastTimestamp:2025-01-17 12:48:11.677083097 +0000 UTC m=+0.781566018,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.220,}"
Jan 17 12:48:11.704311 kubelet[1851]: I0117 12:48:11.704101 1851 policy_none.go:49] "None policy: Start"
Jan 17 12:48:11.706426 kubelet[1851]: I0117 12:48:11.706269 1851 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:48:11.706426 kubelet[1851]: I0117 12:48:11.706357 1851 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:48:11.733687 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 17 12:48:11.745533 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 17 12:48:11.750635 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 17 12:48:11.760255 kubelet[1851]: I0117 12:48:11.759591 1851 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:48:11.760255 kubelet[1851]: I0117 12:48:11.759790 1851 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 12:48:11.760255 kubelet[1851]: I0117 12:48:11.759803 1851 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 12:48:11.761181 kubelet[1851]: I0117 12:48:11.760444 1851 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:48:11.766023 kubelet[1851]: E0117 12:48:11.765999 1851 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.220\" not found"
Jan 17 12:48:11.780383 kubelet[1851]: I0117 12:48:11.780343 1851 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:48:11.781986 kubelet[1851]: I0117 12:48:11.781866 1851 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:48:11.781986 kubelet[1851]: I0117 12:48:11.781898 1851 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:48:11.782174 kubelet[1851]: I0117 12:48:11.782107 1851 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 17 12:48:11.782827 kubelet[1851]: E0117 12:48:11.782802 1851 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 17 12:48:11.863851 kubelet[1851]: I0117 12:48:11.863311 1851 kubelet_node_status.go:72] "Attempting to register node" node="172.24.4.220"
Jan 17 12:48:11.878766 kubelet[1851]: I0117 12:48:11.878676 1851 kubelet_node_status.go:75] "Successfully registered node" node="172.24.4.220"
Jan 17 12:48:11.878766 kubelet[1851]: E0117 12:48:11.878756 1851 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.24.4.220\": node \"172.24.4.220\" not found"
Jan 17 12:48:11.907851 kubelet[1851]: E0117 12:48:11.907790 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found"
Jan 17 12:48:11.960219 sudo[1711]: pam_unix(sudo:session): session closed for user root
Jan 17 12:48:12.008946 kubelet[1851]: E0117 12:48:12.008874 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found"
Jan 17 12:48:12.109635 kubelet[1851]: E0117 12:48:12.109570 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found"
Jan 17 12:48:12.210898 kubelet[1851]: E0117 12:48:12.210254 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found"
Jan 17 12:48:12.291582 sshd[1708]: pam_unix(sshd:session): session closed for user core
Jan 17 12:48:12.296892 systemd[1]: sshd@8-172.24.4.220:22-172.24.4.1:38394.service: Deactivated successfully.
Jan 17 12:48:12.300246 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 12:48:12.300914 systemd[1]: session-11.scope: Consumed 1.038s CPU time, 74.0M memory peak, 0B memory swap peak. Jan 17 12:48:12.303882 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:48:12.306836 systemd-logind[1441]: Removed session 11. Jan 17 12:48:12.311244 kubelet[1851]: E0117 12:48:12.311197 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found" Jan 17 12:48:12.412410 kubelet[1851]: E0117 12:48:12.412309 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found" Jan 17 12:48:12.514142 kubelet[1851]: E0117 12:48:12.513376 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found" Jan 17 12:48:12.594814 kubelet[1851]: I0117 12:48:12.594636 1851 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 12:48:12.595057 kubelet[1851]: W0117 12:48:12.594955 1851 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:48:12.614125 kubelet[1851]: E0117 12:48:12.614073 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found" Jan 17 12:48:12.653997 kubelet[1851]: E0117 12:48:12.653927 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:12.714721 kubelet[1851]: E0117 12:48:12.714672 1851 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.220\" not found" Jan 17 12:48:12.816411 kubelet[1851]: I0117 12:48:12.816246 1851 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 
17 12:48:12.817132 containerd[1463]: time="2025-01-17T12:48:12.816943455Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:48:12.818076 kubelet[1851]: I0117 12:48:12.817979 1851 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 12:48:14.116439 systemd-resolved[1376]: Clock change detected. Flushing caches. Jan 17 12:48:14.116619 systemd-timesyncd[1377]: Contacted time server 5.196.160.139:123 (2.flatcar.pool.ntp.org). Jan 17 12:48:14.116714 systemd-timesyncd[1377]: Initial clock synchronization to Fri 2025-01-17 12:48:14.116205 UTC. Jan 17 12:48:14.129945 kubelet[1851]: E0117 12:48:14.129859 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:14.129945 kubelet[1851]: I0117 12:48:14.129911 1851 apiserver.go:52] "Watching apiserver" Jan 17 12:48:14.138048 kubelet[1851]: E0117 12:48:14.136998 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cq6p7" podUID="e569825e-7420-42dc-bd20-5d7859eabb15" Jan 17 12:48:14.152037 kubelet[1851]: I0117 12:48:14.151984 1851 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:48:14.152922 systemd[1]: Created slice kubepods-besteffort-pod3334c6c7_d824_400e_bceb_560b3e43eab5.slice - libcontainer container kubepods-besteffort-pod3334c6c7_d824_400e_bceb_560b3e43eab5.slice. 
Jan 17 12:48:14.166259 kubelet[1851]: I0117 12:48:14.165044 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/72414523-befd-41f3-be5f-e3571be3fec4-node-certs\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.166259 kubelet[1851]: I0117 12:48:14.165119 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/72414523-befd-41f3-be5f-e3571be3fec4-cni-net-dir\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.166259 kubelet[1851]: I0117 12:48:14.165167 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3334c6c7-d824-400e-bceb-560b3e43eab5-lib-modules\") pod \"kube-proxy-hz2qq\" (UID: \"3334c6c7-d824-400e-bceb-560b3e43eab5\") " pod="kube-system/kube-proxy-hz2qq"
Jan 17 12:48:14.166259 kubelet[1851]: I0117 12:48:14.165209 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/72414523-befd-41f3-be5f-e3571be3fec4-var-lib-calico\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.166259 kubelet[1851]: I0117 12:48:14.165303 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3334c6c7-d824-400e-bceb-560b3e43eab5-kube-proxy\") pod \"kube-proxy-hz2qq\" (UID: \"3334c6c7-d824-400e-bceb-560b3e43eab5\") " pod="kube-system/kube-proxy-hz2qq"
Jan 17 12:48:14.166824 kubelet[1851]: I0117 12:48:14.165345 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z99sj\" (UniqueName: \"kubernetes.io/projected/3334c6c7-d824-400e-bceb-560b3e43eab5-kube-api-access-z99sj\") pod \"kube-proxy-hz2qq\" (UID: \"3334c6c7-d824-400e-bceb-560b3e43eab5\") " pod="kube-system/kube-proxy-hz2qq"
Jan 17 12:48:14.166824 kubelet[1851]: I0117 12:48:14.165476 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/72414523-befd-41f3-be5f-e3571be3fec4-policysync\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.166824 kubelet[1851]: I0117 12:48:14.165530 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72414523-befd-41f3-be5f-e3571be3fec4-tigera-ca-bundle\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.166824 kubelet[1851]: I0117 12:48:14.165573 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/72414523-befd-41f3-be5f-e3571be3fec4-var-run-calico\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.166824 kubelet[1851]: I0117 12:48:14.165612 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/72414523-befd-41f3-be5f-e3571be3fec4-cni-bin-dir\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.167349 kubelet[1851]: I0117 12:48:14.165682 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/72414523-befd-41f3-be5f-e3571be3fec4-cni-log-dir\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.167349 kubelet[1851]: I0117 12:48:14.165782 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e569825e-7420-42dc-bd20-5d7859eabb15-varrun\") pod \"csi-node-driver-cq6p7\" (UID: \"e569825e-7420-42dc-bd20-5d7859eabb15\") " pod="calico-system/csi-node-driver-cq6p7"
Jan 17 12:48:14.167349 kubelet[1851]: I0117 12:48:14.165830 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3334c6c7-d824-400e-bceb-560b3e43eab5-xtables-lock\") pod \"kube-proxy-hz2qq\" (UID: \"3334c6c7-d824-400e-bceb-560b3e43eab5\") " pod="kube-system/kube-proxy-hz2qq"
Jan 17 12:48:14.167349 kubelet[1851]: I0117 12:48:14.165872 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrcq2\" (UniqueName: \"kubernetes.io/projected/e569825e-7420-42dc-bd20-5d7859eabb15-kube-api-access-nrcq2\") pod \"csi-node-driver-cq6p7\" (UID: \"e569825e-7420-42dc-bd20-5d7859eabb15\") " pod="calico-system/csi-node-driver-cq6p7"
Jan 17 12:48:14.167349 kubelet[1851]: I0117 12:48:14.165911 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72414523-befd-41f3-be5f-e3571be3fec4-lib-modules\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.167635 kubelet[1851]: I0117 12:48:14.165949 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72414523-befd-41f3-be5f-e3571be3fec4-xtables-lock\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.167635 kubelet[1851]: I0117 12:48:14.166044 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/72414523-befd-41f3-be5f-e3571be3fec4-flexvol-driver-host\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.167635 kubelet[1851]: I0117 12:48:14.166120 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtg9r\" (UniqueName: \"kubernetes.io/projected/72414523-befd-41f3-be5f-e3571be3fec4-kube-api-access-gtg9r\") pod \"calico-node-4sj7d\" (UID: \"72414523-befd-41f3-be5f-e3571be3fec4\") " pod="calico-system/calico-node-4sj7d"
Jan 17 12:48:14.167842 kubelet[1851]: I0117 12:48:14.166207 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e569825e-7420-42dc-bd20-5d7859eabb15-kubelet-dir\") pod \"csi-node-driver-cq6p7\" (UID: \"e569825e-7420-42dc-bd20-5d7859eabb15\") " pod="calico-system/csi-node-driver-cq6p7"
Jan 17 12:48:14.168033 kubelet[1851]: I0117 12:48:14.167995 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e569825e-7420-42dc-bd20-5d7859eabb15-socket-dir\") pod \"csi-node-driver-cq6p7\" (UID: \"e569825e-7420-42dc-bd20-5d7859eabb15\") " pod="calico-system/csi-node-driver-cq6p7"
Jan 17 12:48:14.168257 kubelet[1851]: I0117 12:48:14.168181 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e569825e-7420-42dc-bd20-5d7859eabb15-registration-dir\") pod \"csi-node-driver-cq6p7\" (UID: \"e569825e-7420-42dc-bd20-5d7859eabb15\") " pod="calico-system/csi-node-driver-cq6p7"
Jan 17 12:48:14.174019 systemd[1]: Created slice kubepods-besteffort-pod72414523_befd_41f3_be5f_e3571be3fec4.slice - libcontainer container kubepods-besteffort-pod72414523_befd_41f3_be5f_e3571be3fec4.slice.
Jan 17 12:48:14.274294 kubelet[1851]: E0117 12:48:14.274254 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.274597 kubelet[1851]: W0117 12:48:14.274564 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.275347 kubelet[1851]: E0117 12:48:14.275315 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.276962 kubelet[1851]: E0117 12:48:14.276868 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.276962 kubelet[1851]: W0117 12:48:14.276906 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.276962 kubelet[1851]: E0117 12:48:14.276943 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.278499 kubelet[1851]: E0117 12:48:14.277346 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.278499 kubelet[1851]: W0117 12:48:14.277369 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.278499 kubelet[1851]: E0117 12:48:14.277617 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.278499 kubelet[1851]: E0117 12:48:14.277819 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.278499 kubelet[1851]: W0117 12:48:14.277841 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.278499 kubelet[1851]: E0117 12:48:14.278041 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.278499 kubelet[1851]: E0117 12:48:14.278300 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.278499 kubelet[1851]: W0117 12:48:14.278347 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.279281 kubelet[1851]: E0117 12:48:14.278653 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.279281 kubelet[1851]: W0117 12:48:14.278672 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.279281 kubelet[1851]: E0117 12:48:14.278694 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.279281 kubelet[1851]: E0117 12:48:14.279069 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.279281 kubelet[1851]: W0117 12:48:14.279089 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.279281 kubelet[1851]: E0117 12:48:14.279111 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.279281 kubelet[1851]: E0117 12:48:14.279145 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.302276 kubelet[1851]: E0117 12:48:14.299517 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.302276 kubelet[1851]: W0117 12:48:14.299558 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.302276 kubelet[1851]: E0117 12:48:14.299596 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.305476 kubelet[1851]: E0117 12:48:14.303506 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.305745 kubelet[1851]: W0117 12:48:14.305588 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.305909 kubelet[1851]: E0117 12:48:14.305633 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.314496 kubelet[1851]: E0117 12:48:14.313580 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:14.316448 kubelet[1851]: W0117 12:48:14.314782 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:14.316448 kubelet[1851]: E0117 12:48:14.315042 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:14.466956 containerd[1463]: time="2025-01-17T12:48:14.466685941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hz2qq,Uid:3334c6c7-d824-400e-bceb-560b3e43eab5,Namespace:kube-system,Attempt:0,}"
Jan 17 12:48:14.482611 containerd[1463]: time="2025-01-17T12:48:14.481894384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4sj7d,Uid:72414523-befd-41f3-be5f-e3571be3fec4,Namespace:calico-system,Attempt:0,}"
Jan 17 12:48:15.130635 kubelet[1851]: E0117 12:48:15.130559 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:48:15.177759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378301828.mount: Deactivated successfully.
Jan 17 12:48:15.196595 containerd[1463]: time="2025-01-17T12:48:15.196502201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:48:15.198730 containerd[1463]: time="2025-01-17T12:48:15.198656812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 17 12:48:15.201642 containerd[1463]: time="2025-01-17T12:48:15.201536202Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:48:15.204251 containerd[1463]: time="2025-01-17T12:48:15.204063602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:48:15.204997 containerd[1463]: time="2025-01-17T12:48:15.204409591Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:48:15.210071 containerd[1463]: time="2025-01-17T12:48:15.209955482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:48:15.215265 containerd[1463]: time="2025-01-17T12:48:15.214333614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 747.43806ms"
Jan 17 12:48:15.217701 containerd[1463]: time="2025-01-17T12:48:15.217615308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 735.497615ms"
Jan 17 12:48:15.435310 containerd[1463]: time="2025-01-17T12:48:15.428056872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:48:15.435310 containerd[1463]: time="2025-01-17T12:48:15.428142021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:48:15.435310 containerd[1463]: time="2025-01-17T12:48:15.428196984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:48:15.435310 containerd[1463]: time="2025-01-17T12:48:15.430316319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:48:15.451843 containerd[1463]: time="2025-01-17T12:48:15.451680367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:48:15.451843 containerd[1463]: time="2025-01-17T12:48:15.451780485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:48:15.451843 containerd[1463]: time="2025-01-17T12:48:15.451818877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:48:15.452109 containerd[1463]: time="2025-01-17T12:48:15.451908395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:48:15.518792 systemd[1]: run-containerd-runc-k8s.io-343f0215a4be55163adbf07cad6d7486377a392f18f03116559bb1030788d25b-runc.keCS02.mount: Deactivated successfully.
Jan 17 12:48:15.520053 systemd[1]: run-containerd-runc-k8s.io-8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f-runc.ZJZkEE.mount: Deactivated successfully.
Jan 17 12:48:15.531009 systemd[1]: Started cri-containerd-343f0215a4be55163adbf07cad6d7486377a392f18f03116559bb1030788d25b.scope - libcontainer container 343f0215a4be55163adbf07cad6d7486377a392f18f03116559bb1030788d25b.
Jan 17 12:48:15.532391 systemd[1]: Started cri-containerd-8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f.scope - libcontainer container 8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f.
Jan 17 12:48:15.564823 containerd[1463]: time="2025-01-17T12:48:15.564560314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hz2qq,Uid:3334c6c7-d824-400e-bceb-560b3e43eab5,Namespace:kube-system,Attempt:0,} returns sandbox id \"343f0215a4be55163adbf07cad6d7486377a392f18f03116559bb1030788d25b\""
Jan 17 12:48:15.564823 containerd[1463]: time="2025-01-17T12:48:15.564742455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4sj7d,Uid:72414523-befd-41f3-be5f-e3571be3fec4,Namespace:calico-system,Attempt:0,} returns sandbox id \"8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f\""
Jan 17 12:48:15.568337 containerd[1463]: time="2025-01-17T12:48:15.568270683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 17 12:48:16.131815 kubelet[1851]: E0117 12:48:16.131732 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:48:16.260236 kubelet[1851]: E0117 12:48:16.257922 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cq6p7" podUID="e569825e-7420-42dc-bd20-5d7859eabb15"
Jan 17 12:48:16.894848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627980079.mount: Deactivated successfully.
Jan 17 12:48:17.132287 kubelet[1851]: E0117 12:48:17.132263 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:48:17.446069 containerd[1463]: time="2025-01-17T12:48:17.445851515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:48:17.447179 containerd[1463]: time="2025-01-17T12:48:17.446956207Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136"
Jan 17 12:48:17.448554 containerd[1463]: time="2025-01-17T12:48:17.448491967Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:48:17.451256 containerd[1463]: time="2025-01-17T12:48:17.451170360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:48:17.452209 containerd[1463]: time="2025-01-17T12:48:17.451846428Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.883502349s"
Jan 17 12:48:17.452209 containerd[1463]: time="2025-01-17T12:48:17.451876735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\""
Jan 17 12:48:17.453140 containerd[1463]: time="2025-01-17T12:48:17.453027564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 17 12:48:17.454620 containerd[1463]: time="2025-01-17T12:48:17.454458487Z" level=info msg="CreateContainer within sandbox \"343f0215a4be55163adbf07cad6d7486377a392f18f03116559bb1030788d25b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 12:48:17.471460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143547739.mount: Deactivated successfully.
Jan 17 12:48:17.490283 containerd[1463]: time="2025-01-17T12:48:17.490210460Z" level=info msg="CreateContainer within sandbox \"343f0215a4be55163adbf07cad6d7486377a392f18f03116559bb1030788d25b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d8d528b98e0c290a6e7117e5da751ca682a694d9e2fb597ed108ea82976ac49\""
Jan 17 12:48:17.491315 containerd[1463]: time="2025-01-17T12:48:17.491047770Z" level=info msg="StartContainer for \"0d8d528b98e0c290a6e7117e5da751ca682a694d9e2fb597ed108ea82976ac49\""
Jan 17 12:48:17.527555 systemd[1]: Started cri-containerd-0d8d528b98e0c290a6e7117e5da751ca682a694d9e2fb597ed108ea82976ac49.scope - libcontainer container 0d8d528b98e0c290a6e7117e5da751ca682a694d9e2fb597ed108ea82976ac49.
Jan 17 12:48:17.557719 containerd[1463]: time="2025-01-17T12:48:17.557681392Z" level=info msg="StartContainer for \"0d8d528b98e0c290a6e7117e5da751ca682a694d9e2fb597ed108ea82976ac49\" returns successfully"
Jan 17 12:48:18.133677 kubelet[1851]: E0117 12:48:18.133594 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:48:18.257928 kubelet[1851]: E0117 12:48:18.257795 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cq6p7" podUID="e569825e-7420-42dc-bd20-5d7859eabb15"
Jan 17 12:48:18.301812 kubelet[1851]: I0117 12:48:18.301690 1851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hz2qq" podStartSLOduration=4.416471353 podStartE2EDuration="6.301662779s" podCreationTimestamp="2025-01-17 12:48:12 +0000 UTC" firstStartedPulling="2025-01-17 12:48:15.567689783 +0000 UTC m=+4.197109619" lastFinishedPulling="2025-01-17 12:48:17.452881209 +0000 UTC m=+6.082301045" observedRunningTime="2025-01-17 12:48:18.298649658 +0000 UTC m=+6.928069534" watchObservedRunningTime="2025-01-17 12:48:18.301662779 +0000 UTC m=+6.931082655"
Jan 17 12:48:18.379913 kubelet[1851]: E0117 12:48:18.379858 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:18.379913 kubelet[1851]: W0117 12:48:18.379899 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:18.380439 kubelet[1851]: E0117 12:48:18.379932 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:18.380439 kubelet[1851]: E0117 12:48:18.380354 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:18.380439 kubelet[1851]: W0117 12:48:18.380375 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:18.380439 kubelet[1851]: E0117 12:48:18.380398 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:18.381088 kubelet[1851]: E0117 12:48:18.380709 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:18.381088 kubelet[1851]: W0117 12:48:18.380730 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:18.381088 kubelet[1851]: E0117 12:48:18.380751 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:18.381088 kubelet[1851]: E0117 12:48:18.381069 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:18.381088 kubelet[1851]: W0117 12:48:18.381089 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:18.381566 kubelet[1851]: E0117 12:48:18.381110 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:18.381566 kubelet[1851]: E0117 12:48:18.381457 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:18.381566 kubelet[1851]: W0117 12:48:18.381478 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:18.381566 kubelet[1851]: E0117 12:48:18.381499 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:18.382196 kubelet[1851]: E0117 12:48:18.381801 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:18.382196 kubelet[1851]: W0117 12:48:18.381821 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:18.382196 kubelet[1851]: E0117 12:48:18.381842 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:48:18.382196 kubelet[1851]: E0117 12:48:18.382195 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:48:18.382849 kubelet[1851]: W0117 12:48:18.382250 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:48:18.382849 kubelet[1851]: E0117 12:48:18.382274 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.382849 kubelet[1851]: E0117 12:48:18.382590 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.382849 kubelet[1851]: W0117 12:48:18.382611 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.382849 kubelet[1851]: E0117 12:48:18.382631 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.383661 kubelet[1851]: E0117 12:48:18.383046 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.383661 kubelet[1851]: W0117 12:48:18.383074 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.383661 kubelet[1851]: E0117 12:48:18.383105 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.383661 kubelet[1851]: E0117 12:48:18.383559 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.383661 kubelet[1851]: W0117 12:48:18.383578 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.383661 kubelet[1851]: E0117 12:48:18.383600 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.384845 kubelet[1851]: E0117 12:48:18.383901 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.384845 kubelet[1851]: W0117 12:48:18.383921 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.384845 kubelet[1851]: E0117 12:48:18.383941 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.384845 kubelet[1851]: E0117 12:48:18.384311 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.384845 kubelet[1851]: W0117 12:48:18.384331 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.384845 kubelet[1851]: E0117 12:48:18.384351 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.384845 kubelet[1851]: E0117 12:48:18.384671 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.384845 kubelet[1851]: W0117 12:48:18.384691 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.384845 kubelet[1851]: E0117 12:48:18.384713 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.387594 kubelet[1851]: E0117 12:48:18.385003 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.387594 kubelet[1851]: W0117 12:48:18.385023 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.387594 kubelet[1851]: E0117 12:48:18.385043 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.387594 kubelet[1851]: E0117 12:48:18.385372 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.387594 kubelet[1851]: W0117 12:48:18.385392 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.387594 kubelet[1851]: E0117 12:48:18.385412 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.387594 kubelet[1851]: E0117 12:48:18.385704 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.387594 kubelet[1851]: W0117 12:48:18.385724 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.387594 kubelet[1851]: E0117 12:48:18.385746 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.387594 kubelet[1851]: E0117 12:48:18.386093 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.388305 kubelet[1851]: W0117 12:48:18.386114 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.388305 kubelet[1851]: E0117 12:48:18.386134 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.388305 kubelet[1851]: E0117 12:48:18.386476 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.388305 kubelet[1851]: W0117 12:48:18.386498 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.388305 kubelet[1851]: E0117 12:48:18.386518 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.388305 kubelet[1851]: E0117 12:48:18.386835 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.388305 kubelet[1851]: W0117 12:48:18.386860 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.388305 kubelet[1851]: E0117 12:48:18.386884 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.388305 kubelet[1851]: E0117 12:48:18.387166 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.388305 kubelet[1851]: W0117 12:48:18.387186 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.388953 kubelet[1851]: E0117 12:48:18.387210 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.401308 kubelet[1851]: E0117 12:48:18.401132 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.401308 kubelet[1851]: W0117 12:48:18.401169 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.401308 kubelet[1851]: E0117 12:48:18.401204 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.401883 kubelet[1851]: E0117 12:48:18.401636 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.401883 kubelet[1851]: W0117 12:48:18.401658 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.401883 kubelet[1851]: E0117 12:48:18.401690 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.402281 kubelet[1851]: E0117 12:48:18.402056 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.402281 kubelet[1851]: W0117 12:48:18.402077 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.402281 kubelet[1851]: E0117 12:48:18.402112 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.402892 kubelet[1851]: E0117 12:48:18.402473 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.402892 kubelet[1851]: W0117 12:48:18.402493 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.402892 kubelet[1851]: E0117 12:48:18.402527 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.403435 kubelet[1851]: E0117 12:48:18.403207 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.403435 kubelet[1851]: W0117 12:48:18.403285 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.403435 kubelet[1851]: E0117 12:48:18.403330 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.404318 kubelet[1851]: E0117 12:48:18.404032 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.404318 kubelet[1851]: W0117 12:48:18.404058 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.404318 kubelet[1851]: E0117 12:48:18.404101 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.404591 kubelet[1851]: E0117 12:48:18.404506 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.404591 kubelet[1851]: W0117 12:48:18.404528 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.404717 kubelet[1851]: E0117 12:48:18.404679 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.404958 kubelet[1851]: E0117 12:48:18.404923 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.404958 kubelet[1851]: W0117 12:48:18.404950 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.405105 kubelet[1851]: E0117 12:48:18.404981 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.405533 kubelet[1851]: E0117 12:48:18.405476 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.405533 kubelet[1851]: W0117 12:48:18.405513 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.405533 kubelet[1851]: E0117 12:48:18.405545 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.406459 kubelet[1851]: E0117 12:48:18.406355 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.406459 kubelet[1851]: W0117 12:48:18.406386 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.406459 kubelet[1851]: E0117 12:48:18.406423 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:18.407368 kubelet[1851]: E0117 12:48:18.407143 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.407368 kubelet[1851]: W0117 12:48:18.407170 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.407368 kubelet[1851]: E0117 12:48:18.407255 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:18.407691 kubelet[1851]: E0117 12:48:18.407607 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:18.407691 kubelet[1851]: W0117 12:48:18.407627 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:18.407691 kubelet[1851]: E0117 12:48:18.407647 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:19.057002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695156404.mount: Deactivated successfully. Jan 17 12:48:19.134613 kubelet[1851]: E0117 12:48:19.134586 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:19.233019 containerd[1463]: time="2025-01-17T12:48:19.232711580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:19.233964 containerd[1463]: time="2025-01-17T12:48:19.233741713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 17 12:48:19.235054 containerd[1463]: time="2025-01-17T12:48:19.235024178Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:19.237666 containerd[1463]: time="2025-01-17T12:48:19.237626028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 17 12:48:19.238868 containerd[1463]: time="2025-01-17T12:48:19.238327784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.78526829s" Jan 17 12:48:19.238868 containerd[1463]: time="2025-01-17T12:48:19.238361477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:48:19.240348 containerd[1463]: time="2025-01-17T12:48:19.240315772Z" level=info msg="CreateContainer within sandbox \"8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:48:19.259373 containerd[1463]: time="2025-01-17T12:48:19.259313783Z" level=info msg="CreateContainer within sandbox \"8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e7b5d290cfc988bf0c6dd2587a71e26116e1770ef26470404ea2b863bf555c4\"" Jan 17 12:48:19.259879 containerd[1463]: time="2025-01-17T12:48:19.259838467Z" level=info msg="StartContainer for \"6e7b5d290cfc988bf0c6dd2587a71e26116e1770ef26470404ea2b863bf555c4\"" Jan 17 12:48:19.295739 kubelet[1851]: E0117 12:48:19.295693 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.295739 kubelet[1851]: W0117 12:48:19.295715 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 
17 12:48:19.295739 kubelet[1851]: E0117 12:48:19.295733 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:19.297085 kubelet[1851]: E0117 12:48:19.296031 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.297085 kubelet[1851]: W0117 12:48:19.296042 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.297085 kubelet[1851]: E0117 12:48:19.296053 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:19.297085 kubelet[1851]: E0117 12:48:19.296239 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.297085 kubelet[1851]: W0117 12:48:19.296248 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.297085 kubelet[1851]: E0117 12:48:19.296256 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:19.297085 kubelet[1851]: E0117 12:48:19.296423 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.297085 kubelet[1851]: W0117 12:48:19.296431 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.297085 kubelet[1851]: E0117 12:48:19.296460 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:19.297085 kubelet[1851]: E0117 12:48:19.296602 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.297398 kubelet[1851]: W0117 12:48:19.296623 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.297398 kubelet[1851]: E0117 12:48:19.296631 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:19.297611 systemd[1]: Started cri-containerd-6e7b5d290cfc988bf0c6dd2587a71e26116e1770ef26470404ea2b863bf555c4.scope - libcontainer container 6e7b5d290cfc988bf0c6dd2587a71e26116e1770ef26470404ea2b863bf555c4. 
Jan 17 12:48:19.298338 kubelet[1851]: E0117 12:48:19.297872 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.298338 kubelet[1851]: W0117 12:48:19.297902 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.298338 kubelet[1851]: E0117 12:48:19.297937 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:19.299371 kubelet[1851]: E0117 12:48:19.298588 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.299371 kubelet[1851]: W0117 12:48:19.298619 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.299371 kubelet[1851]: E0117 12:48:19.298680 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:19.300345 kubelet[1851]: E0117 12:48:19.300329 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.300424 kubelet[1851]: W0117 12:48:19.300410 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.300539 kubelet[1851]: E0117 12:48:19.300502 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:19.300993 kubelet[1851]: E0117 12:48:19.300890 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.300993 kubelet[1851]: W0117 12:48:19.300901 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.300993 kubelet[1851]: E0117 12:48:19.300911 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:19.301168 kubelet[1851]: E0117 12:48:19.301146 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.301336 kubelet[1851]: W0117 12:48:19.301264 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.301336 kubelet[1851]: E0117 12:48:19.301280 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:19.301716 kubelet[1851]: E0117 12:48:19.301652 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.301716 kubelet[1851]: W0117 12:48:19.301662 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.301716 kubelet[1851]: E0117 12:48:19.301674 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:48:19.302064 kubelet[1851]: E0117 12:48:19.301936 1851 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:48:19.302064 kubelet[1851]: W0117 12:48:19.301947 1851 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:48:19.302064 kubelet[1851]: E0117 12:48:19.301974 1851 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:48:19.338081 containerd[1463]: time="2025-01-17T12:48:19.338050494Z" level=info msg="StartContainer for \"6e7b5d290cfc988bf0c6dd2587a71e26116e1770ef26470404ea2b863bf555c4\" returns successfully" Jan 17 12:48:19.344625 systemd[1]: cri-containerd-6e7b5d290cfc988bf0c6dd2587a71e26116e1770ef26470404ea2b863bf555c4.scope: Deactivated successfully. 
Jan 17 12:48:19.940241 containerd[1463]: time="2025-01-17T12:48:19.940078637Z" level=info msg="shim disconnected" id=6e7b5d290cfc988bf0c6dd2587a71e26116e1770ef26470404ea2b863bf555c4 namespace=k8s.io Jan 17 12:48:19.940241 containerd[1463]: time="2025-01-17T12:48:19.940171061Z" level=warning msg="cleaning up after shim disconnected" id=6e7b5d290cfc988bf0c6dd2587a71e26116e1770ef26470404ea2b863bf555c4 namespace=k8s.io Jan 17 12:48:19.940241 containerd[1463]: time="2025-01-17T12:48:19.940193693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:48:19.997511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e7b5d290cfc988bf0c6dd2587a71e26116e1770ef26470404ea2b863bf555c4-rootfs.mount: Deactivated successfully. Jan 17 12:48:20.135283 kubelet[1851]: E0117 12:48:20.135183 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:20.259517 kubelet[1851]: E0117 12:48:20.258112 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cq6p7" podUID="e569825e-7420-42dc-bd20-5d7859eabb15" Jan 17 12:48:20.301974 containerd[1463]: time="2025-01-17T12:48:20.301707542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:48:21.136073 kubelet[1851]: E0117 12:48:21.135998 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:22.136597 kubelet[1851]: E0117 12:48:22.136464 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:22.261246 kubelet[1851]: E0117 12:48:22.261073 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cq6p7" podUID="e569825e-7420-42dc-bd20-5d7859eabb15" Jan 17 12:48:23.137178 kubelet[1851]: E0117 12:48:23.137116 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:24.137998 kubelet[1851]: E0117 12:48:24.137915 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:24.262944 kubelet[1851]: E0117 12:48:24.261651 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cq6p7" podUID="e569825e-7420-42dc-bd20-5d7859eabb15" Jan 17 12:48:25.138802 kubelet[1851]: E0117 12:48:25.138705 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:26.012924 containerd[1463]: time="2025-01-17T12:48:26.012868463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:26.014247 containerd[1463]: time="2025-01-17T12:48:26.014091166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:48:26.015560 containerd[1463]: time="2025-01-17T12:48:26.015516660Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:26.018324 containerd[1463]: time="2025-01-17T12:48:26.018288298Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:26.019168 containerd[1463]: time="2025-01-17T12:48:26.019066988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.71729226s" Jan 17 12:48:26.019168 containerd[1463]: time="2025-01-17T12:48:26.019099499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:48:26.021291 containerd[1463]: time="2025-01-17T12:48:26.021264560Z" level=info msg="CreateContainer within sandbox \"8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:48:26.041153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585218994.mount: Deactivated successfully. Jan 17 12:48:26.047293 containerd[1463]: time="2025-01-17T12:48:26.047259063Z" level=info msg="CreateContainer within sandbox \"8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6e90776a6d62a2b81a075368fe24121ad39a0d6144bdbf6a3d179a2a480121cb\"" Jan 17 12:48:26.048041 containerd[1463]: time="2025-01-17T12:48:26.047802612Z" level=info msg="StartContainer for \"6e90776a6d62a2b81a075368fe24121ad39a0d6144bdbf6a3d179a2a480121cb\"" Jan 17 12:48:26.085397 systemd[1]: Started cri-containerd-6e90776a6d62a2b81a075368fe24121ad39a0d6144bdbf6a3d179a2a480121cb.scope - libcontainer container 6e90776a6d62a2b81a075368fe24121ad39a0d6144bdbf6a3d179a2a480121cb. 
Jan 17 12:48:26.117093 containerd[1463]: time="2025-01-17T12:48:26.116992027Z" level=info msg="StartContainer for \"6e90776a6d62a2b81a075368fe24121ad39a0d6144bdbf6a3d179a2a480121cb\" returns successfully" Jan 17 12:48:26.139750 kubelet[1851]: E0117 12:48:26.139706 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:26.258954 kubelet[1851]: E0117 12:48:26.258582 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cq6p7" podUID="e569825e-7420-42dc-bd20-5d7859eabb15" Jan 17 12:48:27.140710 kubelet[1851]: E0117 12:48:27.140638 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:27.370590 containerd[1463]: time="2025-01-17T12:48:27.370485523Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:48:27.375586 systemd[1]: cri-containerd-6e90776a6d62a2b81a075368fe24121ad39a0d6144bdbf6a3d179a2a480121cb.scope: Deactivated successfully. Jan 17 12:48:27.417853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e90776a6d62a2b81a075368fe24121ad39a0d6144bdbf6a3d179a2a480121cb-rootfs.mount: Deactivated successfully. Jan 17 12:48:27.453093 kubelet[1851]: I0117 12:48:27.451322 1851 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 17 12:48:28.001372 update_engine[1443]: I20250117 12:48:28.000828 1443 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:48:28.141374 kubelet[1851]: E0117 12:48:28.141252 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:28.272076 systemd[1]: Created slice kubepods-besteffort-pode569825e_7420_42dc_bd20_5d7859eabb15.slice - libcontainer container kubepods-besteffort-pode569825e_7420_42dc_bd20_5d7859eabb15.slice. Jan 17 12:48:28.277261 containerd[1463]: time="2025-01-17T12:48:28.277149742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cq6p7,Uid:e569825e-7420-42dc-bd20-5d7859eabb15,Namespace:calico-system,Attempt:0,}" Jan 17 12:48:28.397298 containerd[1463]: time="2025-01-17T12:48:28.396136802Z" level=info msg="shim disconnected" id=6e90776a6d62a2b81a075368fe24121ad39a0d6144bdbf6a3d179a2a480121cb namespace=k8s.io Jan 17 12:48:28.397298 containerd[1463]: time="2025-01-17T12:48:28.396275342Z" level=warning msg="cleaning up after shim disconnected" id=6e90776a6d62a2b81a075368fe24121ad39a0d6144bdbf6a3d179a2a480121cb namespace=k8s.io Jan 17 12:48:28.397298 containerd[1463]: time="2025-01-17T12:48:28.396306901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:48:28.462853 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2373) Jan 17 12:48:28.474705 containerd[1463]: time="2025-01-17T12:48:28.474480886Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:48:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:48:28.530407 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2373) Jan 17 12:48:28.588029 containerd[1463]: time="2025-01-17T12:48:28.587971127Z" level=error msg="Failed to destroy network for sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:28.589559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b-shm.mount: Deactivated successfully. Jan 17 12:48:28.590162 containerd[1463]: time="2025-01-17T12:48:28.589650878Z" level=error msg="encountered an error cleaning up failed sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:28.590162 containerd[1463]: time="2025-01-17T12:48:28.589724997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cq6p7,Uid:e569825e-7420-42dc-bd20-5d7859eabb15,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:28.590657 kubelet[1851]: E0117 12:48:28.590495 1851 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:28.590739 kubelet[1851]: E0117 12:48:28.590658 1851 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cq6p7" Jan 17 12:48:28.590739 kubelet[1851]: E0117 12:48:28.590689 1851 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cq6p7" Jan 17 12:48:28.590823 kubelet[1851]: E0117 12:48:28.590758 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cq6p7_calico-system(e569825e-7420-42dc-bd20-5d7859eabb15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cq6p7_calico-system(e569825e-7420-42dc-bd20-5d7859eabb15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cq6p7" podUID="e569825e-7420-42dc-bd20-5d7859eabb15" Jan 17 12:48:29.141589 kubelet[1851]: E0117 12:48:29.141542 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:29.331315 containerd[1463]: time="2025-01-17T12:48:29.330355960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:48:29.331315 containerd[1463]: time="2025-01-17T12:48:29.331110846Z" level=info 
msg="StopPodSandbox for \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\"" Jan 17 12:48:29.331315 containerd[1463]: time="2025-01-17T12:48:29.331276086Z" level=info msg="Ensure that sandbox bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b in task-service has been cleanup successfully" Jan 17 12:48:29.331770 kubelet[1851]: I0117 12:48:29.330614 1851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Jan 17 12:48:29.377181 containerd[1463]: time="2025-01-17T12:48:29.376946931Z" level=error msg="StopPodSandbox for \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\" failed" error="failed to destroy network for sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:29.377607 kubelet[1851]: E0117 12:48:29.377522 1851 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Jan 17 12:48:29.377775 kubelet[1851]: E0117 12:48:29.377654 1851 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b"} Jan 17 12:48:29.377878 kubelet[1851]: E0117 12:48:29.377826 1851 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e569825e-7420-42dc-bd20-5d7859eabb15\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:48:29.377988 kubelet[1851]: E0117 12:48:29.377925 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e569825e-7420-42dc-bd20-5d7859eabb15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cq6p7" podUID="e569825e-7420-42dc-bd20-5d7859eabb15" Jan 17 12:48:30.142577 kubelet[1851]: E0117 12:48:30.142497 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:30.773841 systemd[1]: Created slice kubepods-besteffort-pod06495861_2bb3_4c45_95ca_1620c6e3c97e.slice - libcontainer container kubepods-besteffort-pod06495861_2bb3_4c45_95ca_1620c6e3c97e.slice. 
Jan 17 12:48:30.890734 kubelet[1851]: I0117 12:48:30.890557 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw2xm\" (UniqueName: \"kubernetes.io/projected/06495861-2bb3-4c45-95ca-1620c6e3c97e-kube-api-access-xw2xm\") pod \"nginx-deployment-8587fbcb89-x927j\" (UID: \"06495861-2bb3-4c45-95ca-1620c6e3c97e\") " pod="default/nginx-deployment-8587fbcb89-x927j" Jan 17 12:48:31.082301 containerd[1463]: time="2025-01-17T12:48:31.081320413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-x927j,Uid:06495861-2bb3-4c45-95ca-1620c6e3c97e,Namespace:default,Attempt:0,}" Jan 17 12:48:31.143664 kubelet[1851]: E0117 12:48:31.143595 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:31.210082 containerd[1463]: time="2025-01-17T12:48:31.207906826Z" level=error msg="Failed to destroy network for sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:31.210082 containerd[1463]: time="2025-01-17T12:48:31.209491127Z" level=error msg="encountered an error cleaning up failed sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:31.210082 containerd[1463]: time="2025-01-17T12:48:31.209541542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-x927j,Uid:06495861-2bb3-4c45-95ca-1620c6e3c97e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:31.210476 kubelet[1851]: E0117 12:48:31.210443 1851 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:31.210858 kubelet[1851]: E0117 12:48:31.210577 1851 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-x927j" Jan 17 12:48:31.210858 kubelet[1851]: E0117 12:48:31.210604 1851 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-x927j" Jan 17 12:48:31.210858 kubelet[1851]: E0117 12:48:31.210646 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-x927j_default(06495861-2bb3-4c45-95ca-1620c6e3c97e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-8587fbcb89-x927j_default(06495861-2bb3-4c45-95ca-1620c6e3c97e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-x927j" podUID="06495861-2bb3-4c45-95ca-1620c6e3c97e" Jan 17 12:48:31.211381 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec-shm.mount: Deactivated successfully. Jan 17 12:48:31.337037 kubelet[1851]: I0117 12:48:31.336927 1851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:48:31.339330 containerd[1463]: time="2025-01-17T12:48:31.339057599Z" level=info msg="StopPodSandbox for \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\"" Jan 17 12:48:31.339330 containerd[1463]: time="2025-01-17T12:48:31.339247375Z" level=info msg="Ensure that sandbox 0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec in task-service has been cleanup successfully" Jan 17 12:48:31.409458 containerd[1463]: time="2025-01-17T12:48:31.409382844Z" level=error msg="StopPodSandbox for \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\" failed" error="failed to destroy network for sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:48:31.409934 kubelet[1851]: E0117 12:48:31.409878 1851 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:48:31.410387 kubelet[1851]: E0117 12:48:31.410113 1851 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec"} Jan 17 12:48:31.410387 kubelet[1851]: E0117 12:48:31.410198 1851 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06495861-2bb3-4c45-95ca-1620c6e3c97e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:48:31.410387 kubelet[1851]: E0117 12:48:31.410310 1851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06495861-2bb3-4c45-95ca-1620c6e3c97e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-x927j" podUID="06495861-2bb3-4c45-95ca-1620c6e3c97e" Jan 17 12:48:32.126417 kubelet[1851]: E0117 12:48:32.126365 1851 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:32.145258 kubelet[1851]: E0117 
12:48:32.145190 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:33.145718 kubelet[1851]: E0117 12:48:33.145658 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:34.146758 kubelet[1851]: E0117 12:48:34.146649 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:35.146842 kubelet[1851]: E0117 12:48:35.146804 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:36.147075 kubelet[1851]: E0117 12:48:36.147037 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:37.147415 kubelet[1851]: E0117 12:48:37.147169 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:38.045079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661584867.mount: Deactivated successfully. 
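The repeated RunPodSandbox/StopPodSandbox failures above all reduce to one condition: the CNI plugin stats `/var/lib/calico/nodename`, a file the calico/node container writes once it is running, and until that file exists every sandbox add and delete fails with the same message. A minimal sketch of that check (the path and error text mirror the log; the function name is hypothetical, not Calico's code):

```python
import os

CALICO_NODENAME = "/var/lib/calico/nodename"

def read_calico_nodename(path=CALICO_NODENAME):
    """Mirror the CNI plugin's failure mode seen in the log above."""
    if not os.path.exists(path):
        # Same guidance the plugin emits in the log entries.
        raise FileNotFoundError(
            f"stat {path}: no such file or directory: check that the "
            "calico/node container is running and has mounted /var/lib/calico/"
        )
    with open(path) as f:
        return f.read().strip()
```

Consistent with this, once calico-node starts at 12:48:38 below, the retried sandbox for the same nginx pod at 12:48:45 tears down and recreates cleanly.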
Jan 17 12:48:38.092366 containerd[1463]: time="2025-01-17T12:48:38.092300585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:38.094398 containerd[1463]: time="2025-01-17T12:48:38.094346081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:48:38.095681 containerd[1463]: time="2025-01-17T12:48:38.095625711Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:38.098116 containerd[1463]: time="2025-01-17T12:48:38.098093770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:38.098809 containerd[1463]: time="2025-01-17T12:48:38.098656626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.768261753s" Jan 17 12:48:38.098809 containerd[1463]: time="2025-01-17T12:48:38.098690720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:48:38.111323 containerd[1463]: time="2025-01-17T12:48:38.111096687Z" level=info msg="CreateContainer within sandbox \"8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:48:38.133666 containerd[1463]: time="2025-01-17T12:48:38.133594872Z" level=info 
msg="CreateContainer within sandbox \"8de30d09ae82b466823e4c593f3dc46e145489eec0a22ad742de9d5bbb2a186f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6a32dcdee1abb4c36a6efcca409e724a15399515147af0f319a4c8fd4123d93c\"" Jan 17 12:48:38.134641 containerd[1463]: time="2025-01-17T12:48:38.134596801Z" level=info msg="StartContainer for \"6a32dcdee1abb4c36a6efcca409e724a15399515147af0f319a4c8fd4123d93c\"" Jan 17 12:48:38.148167 kubelet[1851]: E0117 12:48:38.147977 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:38.172351 systemd[1]: Started cri-containerd-6a32dcdee1abb4c36a6efcca409e724a15399515147af0f319a4c8fd4123d93c.scope - libcontainer container 6a32dcdee1abb4c36a6efcca409e724a15399515147af0f319a4c8fd4123d93c. Jan 17 12:48:38.203004 containerd[1463]: time="2025-01-17T12:48:38.202957843Z" level=info msg="StartContainer for \"6a32dcdee1abb4c36a6efcca409e724a15399515147af0f319a4c8fd4123d93c\" returns successfully" Jan 17 12:48:38.272887 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:48:38.272956 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 17 12:48:38.404038 kubelet[1851]: I0117 12:48:38.403884 1851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4sj7d" podStartSLOduration=3.8715939649999997 podStartE2EDuration="26.403800493s" podCreationTimestamp="2025-01-17 12:48:12 +0000 UTC" firstStartedPulling="2025-01-17 12:48:15.567771937 +0000 UTC m=+4.197191773" lastFinishedPulling="2025-01-17 12:48:38.099978475 +0000 UTC m=+26.729398301" observedRunningTime="2025-01-17 12:48:38.402140129 +0000 UTC m=+27.031560055" watchObservedRunningTime="2025-01-17 12:48:38.403800493 +0000 UTC m=+27.033220369" Jan 17 12:48:39.148675 kubelet[1851]: E0117 12:48:39.148549 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:40.028289 kernel: bpftool[2659]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:48:40.149957 kubelet[1851]: E0117 12:48:40.149888 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:40.292962 systemd-networkd[1375]: vxlan.calico: Link UP Jan 17 12:48:40.292972 systemd-networkd[1375]: vxlan.calico: Gained carrier Jan 17 12:48:41.150893 kubelet[1851]: E0117 12:48:41.150807 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:41.260496 containerd[1463]: time="2025-01-17T12:48:41.259860372Z" level=info msg="StopPodSandbox for \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\"" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.368 [INFO][2753] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.368 [INFO][2753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" iface="eth0" netns="/var/run/netns/cni-26796a69-76c2-83cb-f422-f92be2292dfd" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.369 [INFO][2753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" iface="eth0" netns="/var/run/netns/cni-26796a69-76c2-83cb-f422-f92be2292dfd" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.370 [INFO][2753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" iface="eth0" netns="/var/run/netns/cni-26796a69-76c2-83cb-f422-f92be2292dfd" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.370 [INFO][2753] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.370 [INFO][2753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.425 [INFO][2759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" HandleID="k8s-pod-network.bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.425 [INFO][2759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.425 [INFO][2759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.438 [WARNING][2759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" HandleID="k8s-pod-network.bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.438 [INFO][2759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" HandleID="k8s-pod-network.bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.441 [INFO][2759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:48:41.448723 containerd[1463]: 2025-01-17 12:48:41.445 [INFO][2753] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Jan 17 12:48:41.451073 containerd[1463]: time="2025-01-17T12:48:41.450722164Z" level=info msg="TearDown network for sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\" successfully" Jan 17 12:48:41.451073 containerd[1463]: time="2025-01-17T12:48:41.450810700Z" level=info msg="StopPodSandbox for \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\" returns successfully" Jan 17 12:48:41.453763 containerd[1463]: time="2025-01-17T12:48:41.453665334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cq6p7,Uid:e569825e-7420-42dc-bd20-5d7859eabb15,Namespace:calico-system,Attempt:1,}" Jan 17 12:48:41.457567 systemd[1]: run-netns-cni\x2d26796a69\x2d76c2\x2d83cb\x2df422\x2df92be2292dfd.mount: Deactivated successfully. 
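The pod_startup_latency_tracker entry for calico-node-4sj7d (12:48:38 above) reports several derived durations whose relationship can be verified from the logged timestamps: E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration is the E2E duration with the image-pull window excluded. A sketch using the values from the log (plain datetime arithmetic, nothing kubelet-specific; nanoseconds are truncated to microseconds for `strptime`):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f %z"

def parse(ts):
    # Log timestamps look like "2025-01-17 12:48:38.099978475 +0000 UTC";
    # drop the trailing "UTC" and truncate ns -> us for datetime.
    date, time_, off, _ = ts.split()
    return datetime.strptime(f"{date} {time_[:15]} {off}", FMT)

created  = parse("2025-01-17 12:48:12.000000000 +0000 UTC")
pull_beg = parse("2025-01-17 12:48:15.567771937 +0000 UTC")
pull_end = parse("2025-01-17 12:48:38.099978475 +0000 UTC")
running  = parse("2025-01-17 12:48:38.403800493 +0000 UTC")

e2e = (running - created).total_seconds()          # podStartE2EDuration, ~26.4s
slo = e2e - (pull_end - pull_beg).total_seconds()  # podStartSLOduration, ~3.87s
```

Both computed values match the logged `podStartE2EDuration="26.403800493s"` and `podStartSLOduration=3.8715939649999997` to within the truncated nanoseconds.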
Jan 17 12:48:41.689676 systemd-networkd[1375]: calieae4c3992b6: Link UP Jan 17 12:48:41.690126 systemd-networkd[1375]: calieae4c3992b6: Gained carrier Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.541 [INFO][2766] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.220-k8s-csi--node--driver--cq6p7-eth0 csi-node-driver- calico-system e569825e-7420-42dc-bd20-5d7859eabb15 1145 0 2025-01-17 12:48:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.220 csi-node-driver-cq6p7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieae4c3992b6 [] []}} ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Namespace="calico-system" Pod="csi-node-driver-cq6p7" WorkloadEndpoint="172.24.4.220-k8s-csi--node--driver--cq6p7-" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.541 [INFO][2766] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Namespace="calico-system" Pod="csi-node-driver-cq6p7" WorkloadEndpoint="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.602 [INFO][2778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" HandleID="k8s-pod-network.4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.621 [INFO][2778] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" HandleID="k8s-pod-network.4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293520), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.220", "pod":"csi-node-driver-cq6p7", "timestamp":"2025-01-17 12:48:41.602432106 +0000 UTC"}, Hostname:"172.24.4.220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.621 [INFO][2778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.621 [INFO][2778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.621 [INFO][2778] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.220' Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.625 [INFO][2778] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" host="172.24.4.220" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.633 [INFO][2778] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.220" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.642 [INFO][2778] ipam/ipam.go 489: Trying affinity for 192.168.93.192/26 host="172.24.4.220" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.647 [INFO][2778] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.192/26 host="172.24.4.220" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.651 [INFO][2778] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.93.192/26 host="172.24.4.220" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.652 [INFO][2778] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" host="172.24.4.220" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.655 [INFO][2778] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.664 [INFO][2778] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.192/26 handle="k8s-pod-network.4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" host="172.24.4.220" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.679 [INFO][2778] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.193/26] block=192.168.93.192/26 handle="k8s-pod-network.4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" host="172.24.4.220" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.679 [INFO][2778] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.193/26] handle="k8s-pod-network.4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" host="172.24.4.220" Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.680 [INFO][2778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
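The IPAM walk above (look up host affinities, try affinity for 192.168.93.192/26, load the block, claim one address) ends with 192.168.93.193/26 assigned to the csi-node-driver pod. The block arithmetic can be checked with the stdlib; this illustrates the logged result only, not Calico's actual allocator, and whether the block base address is assignable is Calico's decision, not shown here:

```python
import ipaddress

block = ipaddress.ip_network("192.168.93.192/26")   # affine block from the log
assigned = ipaddress.ip_address("192.168.93.193")   # address IPAM claimed

# A /26 holds 64 addresses, and the claimed address falls inside the block.
assert block.num_addresses == 64
assert assigned in block
```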
Jan 17 12:48:41.721157 containerd[1463]: 2025-01-17 12:48:41.680 [INFO][2778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.193/26] IPv6=[] ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" HandleID="k8s-pod-network.4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.723713 containerd[1463]: 2025-01-17 12:48:41.683 [INFO][2766] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Namespace="calico-system" Pod="csi-node-driver-cq6p7" WorkloadEndpoint="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-csi--node--driver--cq6p7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e569825e-7420-42dc-bd20-5d7859eabb15", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"", Pod:"csi-node-driver-cq6p7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieae4c3992b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:48:41.723713 containerd[1463]: 2025-01-17 12:48:41.683 [INFO][2766] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.193/32] ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Namespace="calico-system" Pod="csi-node-driver-cq6p7" WorkloadEndpoint="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.723713 containerd[1463]: 2025-01-17 12:48:41.683 [INFO][2766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieae4c3992b6 ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Namespace="calico-system" Pod="csi-node-driver-cq6p7" WorkloadEndpoint="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.723713 containerd[1463]: 2025-01-17 12:48:41.689 [INFO][2766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Namespace="calico-system" Pod="csi-node-driver-cq6p7" WorkloadEndpoint="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.723713 containerd[1463]: 2025-01-17 12:48:41.692 [INFO][2766] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Namespace="calico-system" Pod="csi-node-driver-cq6p7" WorkloadEndpoint="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-csi--node--driver--cq6p7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e569825e-7420-42dc-bd20-5d7859eabb15", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.January, 
17, 12, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b", Pod:"csi-node-driver-cq6p7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieae4c3992b6", MAC:"1e:0a:1d:25:65:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:48:41.723713 containerd[1463]: 2025-01-17 12:48:41.718 [INFO][2766] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b" Namespace="calico-system" Pod="csi-node-driver-cq6p7" WorkloadEndpoint="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:48:41.769932 containerd[1463]: time="2025-01-17T12:48:41.769801106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:48:41.770097 containerd[1463]: time="2025-01-17T12:48:41.769909179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:48:41.770097 containerd[1463]: time="2025-01-17T12:48:41.769952791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:48:41.770733 containerd[1463]: time="2025-01-17T12:48:41.770090148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:48:41.791388 systemd[1]: Started cri-containerd-4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b.scope - libcontainer container 4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b. Jan 17 12:48:41.811708 containerd[1463]: time="2025-01-17T12:48:41.811665553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cq6p7,Uid:e569825e-7420-42dc-bd20-5d7859eabb15,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b\"" Jan 17 12:48:41.813516 containerd[1463]: time="2025-01-17T12:48:41.813467412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:48:42.008762 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Jan 17 12:48:42.121030 kubelet[1851]: I0117 12:48:42.120905 1851 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:48:42.151189 kubelet[1851]: E0117 12:48:42.150997 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:43.032675 systemd-networkd[1375]: calieae4c3992b6: Gained IPv6LL Jan 17 12:48:43.151850 kubelet[1851]: E0117 12:48:43.151771 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:44.152337 kubelet[1851]: E0117 12:48:44.152152 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 
17 12:48:44.423088 containerd[1463]: time="2025-01-17T12:48:44.422778314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:44.424398 containerd[1463]: time="2025-01-17T12:48:44.424336467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:48:44.425582 containerd[1463]: time="2025-01-17T12:48:44.425538862Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:44.427968 containerd[1463]: time="2025-01-17T12:48:44.427936068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:44.428898 containerd[1463]: time="2025-01-17T12:48:44.428614189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.615108926s" Jan 17 12:48:44.428898 containerd[1463]: time="2025-01-17T12:48:44.428651049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:48:44.431239 containerd[1463]: time="2025-01-17T12:48:44.431197224Z" level=info msg="CreateContainer within sandbox \"4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:48:44.455790 containerd[1463]: time="2025-01-17T12:48:44.455665995Z" level=info msg="CreateContainer within 
sandbox \"4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dcd38264134abe0b3196ffcfe1d5d8ae1e29befc0b1b21b906cf09dbe177482f\"" Jan 17 12:48:44.456466 containerd[1463]: time="2025-01-17T12:48:44.456446038Z" level=info msg="StartContainer for \"dcd38264134abe0b3196ffcfe1d5d8ae1e29befc0b1b21b906cf09dbe177482f\"" Jan 17 12:48:44.496468 systemd[1]: run-containerd-runc-k8s.io-dcd38264134abe0b3196ffcfe1d5d8ae1e29befc0b1b21b906cf09dbe177482f-runc.H4FToc.mount: Deactivated successfully. Jan 17 12:48:44.504531 systemd[1]: Started cri-containerd-dcd38264134abe0b3196ffcfe1d5d8ae1e29befc0b1b21b906cf09dbe177482f.scope - libcontainer container dcd38264134abe0b3196ffcfe1d5d8ae1e29befc0b1b21b906cf09dbe177482f. Jan 17 12:48:44.536373 containerd[1463]: time="2025-01-17T12:48:44.536331082Z" level=info msg="StartContainer for \"dcd38264134abe0b3196ffcfe1d5d8ae1e29befc0b1b21b906cf09dbe177482f\" returns successfully" Jan 17 12:48:44.538139 containerd[1463]: time="2025-01-17T12:48:44.538099048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:48:45.153092 kubelet[1851]: E0117 12:48:45.153021 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:45.259478 containerd[1463]: time="2025-01-17T12:48:45.258958490Z" level=info msg="StopPodSandbox for \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\"" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.356 [INFO][2933] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.356 [INFO][2933] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" iface="eth0" netns="/var/run/netns/cni-8bca9087-dbf6-46b1-8c8b-997be7526023" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.358 [INFO][2933] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" iface="eth0" netns="/var/run/netns/cni-8bca9087-dbf6-46b1-8c8b-997be7526023" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.359 [INFO][2933] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" iface="eth0" netns="/var/run/netns/cni-8bca9087-dbf6-46b1-8c8b-997be7526023" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.359 [INFO][2933] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.359 [INFO][2933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.422 [INFO][2939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" HandleID="k8s-pod-network.0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.422 [INFO][2939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.422 [INFO][2939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.440 [WARNING][2939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" HandleID="k8s-pod-network.0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.440 [INFO][2939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" HandleID="k8s-pod-network.0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.445 [INFO][2939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:48:45.453421 containerd[1463]: 2025-01-17 12:48:45.448 [INFO][2933] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:48:45.459160 containerd[1463]: time="2025-01-17T12:48:45.453746986Z" level=info msg="TearDown network for sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\" successfully" Jan 17 12:48:45.459160 containerd[1463]: time="2025-01-17T12:48:45.453816907Z" level=info msg="StopPodSandbox for \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\" returns successfully" Jan 17 12:48:45.459160 containerd[1463]: time="2025-01-17T12:48:45.457589512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-x927j,Uid:06495861-2bb3-4c45-95ca-1620c6e3c97e,Namespace:default,Attempt:1,}" Jan 17 12:48:45.459139 systemd[1]: run-netns-cni\x2d8bca9087\x2ddbf6\x2d46b1\x2d8c8b\x2d997be7526023.mount: Deactivated successfully. 
Jan 17 12:48:45.650202 systemd-networkd[1375]: caliad8bfba02ca: Link UP Jan 17 12:48:45.655464 systemd-networkd[1375]: caliad8bfba02ca: Gained carrier Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.553 [INFO][2945] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0 nginx-deployment-8587fbcb89- default 06495861-2bb3-4c45-95ca-1620c6e3c97e 1166 0 2025-01-17 12:48:30 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.220 nginx-deployment-8587fbcb89-x927j eth0 default [] [] [kns.default ksa.default.default] caliad8bfba02ca [] []}} ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Namespace="default" Pod="nginx-deployment-8587fbcb89-x927j" WorkloadEndpoint="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.553 [INFO][2945] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Namespace="default" Pod="nginx-deployment-8587fbcb89-x927j" WorkloadEndpoint="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.587 [INFO][2957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" HandleID="k8s-pod-network.14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.597 [INFO][2957] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" 
HandleID="k8s-pod-network.14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318d20), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.220", "pod":"nginx-deployment-8587fbcb89-x927j", "timestamp":"2025-01-17 12:48:45.587115037 +0000 UTC"}, Hostname:"172.24.4.220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.598 [INFO][2957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.598 [INFO][2957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.598 [INFO][2957] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.220' Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.601 [INFO][2957] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" host="172.24.4.220" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.612 [INFO][2957] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.220" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.618 [INFO][2957] ipam/ipam.go 489: Trying affinity for 192.168.93.192/26 host="172.24.4.220" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.621 [INFO][2957] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.192/26 host="172.24.4.220" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.624 [INFO][2957] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="172.24.4.220" Jan 17 12:48:45.678676 
containerd[1463]: 2025-01-17 12:48:45.624 [INFO][2957] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" host="172.24.4.220" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.626 [INFO][2957] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211 Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.632 [INFO][2957] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.192/26 handle="k8s-pod-network.14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" host="172.24.4.220" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.641 [INFO][2957] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.194/26] block=192.168.93.192/26 handle="k8s-pod-network.14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" host="172.24.4.220" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.641 [INFO][2957] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.194/26] handle="k8s-pod-network.14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" host="172.24.4.220" Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.641 [INFO][2957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:48:45.678676 containerd[1463]: 2025-01-17 12:48:45.641 [INFO][2957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.194/26] IPv6=[] ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" HandleID="k8s-pod-network.14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.681985 containerd[1463]: 2025-01-17 12:48:45.644 [INFO][2945] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Namespace="default" Pod="nginx-deployment-8587fbcb89-x927j" WorkloadEndpoint="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"06495861-2bb3-4c45-95ca-1620c6e3c97e", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 48, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-x927j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliad8bfba02ca", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:48:45.681985 containerd[1463]: 2025-01-17 12:48:45.644 [INFO][2945] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.194/32] ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Namespace="default" Pod="nginx-deployment-8587fbcb89-x927j" WorkloadEndpoint="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.681985 containerd[1463]: 2025-01-17 12:48:45.644 [INFO][2945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad8bfba02ca ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Namespace="default" Pod="nginx-deployment-8587fbcb89-x927j" WorkloadEndpoint="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.681985 containerd[1463]: 2025-01-17 12:48:45.663 [INFO][2945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Namespace="default" Pod="nginx-deployment-8587fbcb89-x927j" WorkloadEndpoint="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.681985 containerd[1463]: 2025-01-17 12:48:45.664 [INFO][2945] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Namespace="default" Pod="nginx-deployment-8587fbcb89-x927j" WorkloadEndpoint="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"06495861-2bb3-4c45-95ca-1620c6e3c97e", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 48, 30, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211", Pod:"nginx-deployment-8587fbcb89-x927j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliad8bfba02ca", MAC:"d2:2c:13:da:df:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:48:45.681985 containerd[1463]: 2025-01-17 12:48:45.677 [INFO][2945] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211" Namespace="default" Pod="nginx-deployment-8587fbcb89-x927j" WorkloadEndpoint="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:48:45.719853 containerd[1463]: time="2025-01-17T12:48:45.719602528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:48:45.719853 containerd[1463]: time="2025-01-17T12:48:45.719706603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:48:45.719853 containerd[1463]: time="2025-01-17T12:48:45.719747490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:48:45.722303 containerd[1463]: time="2025-01-17T12:48:45.721543488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:48:45.746481 systemd[1]: Started cri-containerd-14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211.scope - libcontainer container 14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211. Jan 17 12:48:45.782594 containerd[1463]: time="2025-01-17T12:48:45.782474574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-x927j,Uid:06495861-2bb3-4c45-95ca-1620c6e3c97e,Namespace:default,Attempt:1,} returns sandbox id \"14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211\"" Jan 17 12:48:46.153525 kubelet[1851]: E0117 12:48:46.153489 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:46.282201 containerd[1463]: time="2025-01-17T12:48:46.282164903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:46.283161 containerd[1463]: time="2025-01-17T12:48:46.283124814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:48:46.284645 containerd[1463]: time="2025-01-17T12:48:46.284602926Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:46.287329 containerd[1463]: time="2025-01-17T12:48:46.287285206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 
12:48:46.288131 containerd[1463]: time="2025-01-17T12:48:46.287987884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.749849823s" Jan 17 12:48:46.288131 containerd[1463]: time="2025-01-17T12:48:46.288033069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:48:46.289691 containerd[1463]: time="2025-01-17T12:48:46.289512363Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:48:46.290572 containerd[1463]: time="2025-01-17T12:48:46.290524171Z" level=info msg="CreateContainer within sandbox \"4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:48:46.308260 containerd[1463]: time="2025-01-17T12:48:46.308167782Z" level=info msg="CreateContainer within sandbox \"4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1ce7280f7e8973fd77ac5e306ef6c170f1b70505a349e66514ede00c64e60b81\"" Jan 17 12:48:46.308859 containerd[1463]: time="2025-01-17T12:48:46.308838149Z" level=info msg="StartContainer for \"1ce7280f7e8973fd77ac5e306ef6c170f1b70505a349e66514ede00c64e60b81\"" Jan 17 12:48:46.354464 systemd[1]: Started cri-containerd-1ce7280f7e8973fd77ac5e306ef6c170f1b70505a349e66514ede00c64e60b81.scope - libcontainer container 1ce7280f7e8973fd77ac5e306ef6c170f1b70505a349e66514ede00c64e60b81. 
Jan 17 12:48:46.390492 containerd[1463]: time="2025-01-17T12:48:46.390336489Z" level=info msg="StartContainer for \"1ce7280f7e8973fd77ac5e306ef6c170f1b70505a349e66514ede00c64e60b81\" returns successfully" Jan 17 12:48:47.154080 kubelet[1851]: E0117 12:48:47.153918 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:47.260875 kubelet[1851]: I0117 12:48:47.260580 1851 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:48:47.260875 kubelet[1851]: I0117 12:48:47.260687 1851 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:48:47.384499 systemd-networkd[1375]: caliad8bfba02ca: Gained IPv6LL Jan 17 12:48:48.154147 kubelet[1851]: E0117 12:48:48.154078 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:49.155798 kubelet[1851]: E0117 12:48:49.154880 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:50.155357 kubelet[1851]: E0117 12:48:50.155303 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:50.510351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2511151991.mount: Deactivated successfully. 
Jan 17 12:48:51.156640 kubelet[1851]: E0117 12:48:51.156600 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:51.768279 containerd[1463]: time="2025-01-17T12:48:51.767255921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:51.769625 containerd[1463]: time="2025-01-17T12:48:51.769588540Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 17 12:48:51.771538 containerd[1463]: time="2025-01-17T12:48:51.771493844Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:51.775882 containerd[1463]: time="2025-01-17T12:48:51.775767865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:48:51.776959 containerd[1463]: time="2025-01-17T12:48:51.776829741Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.487275969s" Jan 17 12:48:51.776959 containerd[1463]: time="2025-01-17T12:48:51.776868134Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 12:48:51.779916 containerd[1463]: time="2025-01-17T12:48:51.779775279Z" level=info msg="CreateContainer within sandbox \"14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 12:48:51.799119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059894402.mount: Deactivated successfully. Jan 17 12:48:51.799712 containerd[1463]: time="2025-01-17T12:48:51.799676956Z" level=info msg="CreateContainer within sandbox \"14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6855fd0b04ceb3929a93e054a9a515368a623a228b953a7e3f2021f52c8f3b35\"" Jan 17 12:48:51.800745 containerd[1463]: time="2025-01-17T12:48:51.800648017Z" level=info msg="StartContainer for \"6855fd0b04ceb3929a93e054a9a515368a623a228b953a7e3f2021f52c8f3b35\"" Jan 17 12:48:51.835375 systemd[1]: Started cri-containerd-6855fd0b04ceb3929a93e054a9a515368a623a228b953a7e3f2021f52c8f3b35.scope - libcontainer container 6855fd0b04ceb3929a93e054a9a515368a623a228b953a7e3f2021f52c8f3b35. Jan 17 12:48:51.866493 containerd[1463]: time="2025-01-17T12:48:51.866449075Z" level=info msg="StartContainer for \"6855fd0b04ceb3929a93e054a9a515368a623a228b953a7e3f2021f52c8f3b35\" returns successfully" Jan 17 12:48:52.126677 kubelet[1851]: E0117 12:48:52.126434 1851 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:52.157235 kubelet[1851]: E0117 12:48:52.157145 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:52.447324 kubelet[1851]: I0117 12:48:52.447170 1851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cq6p7" podStartSLOduration=35.970976349 podStartE2EDuration="40.447138079s" podCreationTimestamp="2025-01-17 12:48:12 +0000 UTC" firstStartedPulling="2025-01-17 12:48:41.81290623 +0000 UTC m=+30.442326066" lastFinishedPulling="2025-01-17 12:48:46.28906796 +0000 UTC m=+34.918487796" observedRunningTime="2025-01-17 12:48:47.431081439 +0000 UTC m=+36.060501315" 
watchObservedRunningTime="2025-01-17 12:48:52.447138079 +0000 UTC m=+41.076558006" Jan 17 12:48:53.157651 kubelet[1851]: E0117 12:48:53.157528 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:54.158595 kubelet[1851]: E0117 12:48:54.158490 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:55.159091 kubelet[1851]: E0117 12:48:55.158984 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:56.160113 kubelet[1851]: E0117 12:48:56.160023 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:57.161199 kubelet[1851]: E0117 12:48:57.161108 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:58.161472 kubelet[1851]: E0117 12:48:58.161369 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:48:59.161673 kubelet[1851]: E0117 12:48:59.161585 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:00.162256 kubelet[1851]: E0117 12:49:00.162140 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:00.697276 kubelet[1851]: I0117 12:49:00.697125 1851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-x927j" podStartSLOduration=24.702367774 podStartE2EDuration="30.697088161s" podCreationTimestamp="2025-01-17 12:48:30 +0000 UTC" firstStartedPulling="2025-01-17 12:48:45.783876453 +0000 UTC m=+34.413296290" lastFinishedPulling="2025-01-17 12:48:51.778596841 +0000 UTC m=+40.408016677" 
observedRunningTime="2025-01-17 12:48:52.448586218 +0000 UTC m=+41.078006094" watchObservedRunningTime="2025-01-17 12:49:00.697088161 +0000 UTC m=+49.326508037" Jan 17 12:49:00.715789 systemd[1]: Created slice kubepods-besteffort-podde332b37_785e_4aea_986e_f05a502d9686.slice - libcontainer container kubepods-besteffort-podde332b37_785e_4aea_986e_f05a502d9686.slice. Jan 17 12:49:00.803869 kubelet[1851]: I0117 12:49:00.803751 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/de332b37-785e-4aea-986e-f05a502d9686-data\") pod \"nfs-server-provisioner-0\" (UID: \"de332b37-785e-4aea-986e-f05a502d9686\") " pod="default/nfs-server-provisioner-0" Jan 17 12:49:00.803869 kubelet[1851]: I0117 12:49:00.803842 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmt7k\" (UniqueName: \"kubernetes.io/projected/de332b37-785e-4aea-986e-f05a502d9686-kube-api-access-xmt7k\") pod \"nfs-server-provisioner-0\" (UID: \"de332b37-785e-4aea-986e-f05a502d9686\") " pod="default/nfs-server-provisioner-0" Jan 17 12:49:01.021831 containerd[1463]: time="2025-01-17T12:49:01.021617617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:de332b37-785e-4aea-986e-f05a502d9686,Namespace:default,Attempt:0,}" Jan 17 12:49:01.163446 kubelet[1851]: E0117 12:49:01.163312 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:01.234900 systemd-networkd[1375]: cali60e51b789ff: Link UP Jan 17 12:49:01.237129 systemd-networkd[1375]: cali60e51b789ff: Gained carrier Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.131 [INFO][3179] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.220-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 
de332b37-785e-4aea-986e-f05a502d9686 1231 0 2025-01-17 12:49:00 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.220 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.220-k8s-nfs--server--provisioner--0-" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.131 [INFO][3179] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.164 [INFO][3189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" HandleID="k8s-pod-network.08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Workload="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.181 [INFO][3189] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" HandleID="k8s-pod-network.08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Workload="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fc210), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.220", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-17 12:49:01.16456271 +0000 UTC"}, Hostname:"172.24.4.220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.181 [INFO][3189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.181 [INFO][3189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.181 [INFO][3189] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.220' Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.184 [INFO][3189] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" host="172.24.4.220" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.191 [INFO][3189] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.220" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.197 [INFO][3189] ipam/ipam.go 489: Trying affinity for 192.168.93.192/26 host="172.24.4.220" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.199 [INFO][3189] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.192/26 host="172.24.4.220" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.203 [INFO][3189] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.93.192/26 host="172.24.4.220" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.203 [INFO][3189] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" host="172.24.4.220" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.205 [INFO][3189] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139 Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.213 [INFO][3189] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.192/26 handle="k8s-pod-network.08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" host="172.24.4.220" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.224 [INFO][3189] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.195/26] block=192.168.93.192/26 handle="k8s-pod-network.08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" host="172.24.4.220" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.224 [INFO][3189] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.195/26] handle="k8s-pod-network.08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" host="172.24.4.220" Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.225 [INFO][3189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:49:01.250915 containerd[1463]: 2025-01-17 12:49:01.225 [INFO][3189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.195/26] IPv6=[] ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" HandleID="k8s-pod-network.08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Workload="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:49:01.257202 containerd[1463]: 2025-01-17 12:49:01.228 [INFO][3179] cni-plugin/k8s.go 386: Populated endpoint ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"de332b37-785e-4aea-986e-f05a502d9686", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 49, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.93.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:49:01.257202 containerd[1463]: 2025-01-17 12:49:01.229 [INFO][3179] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.195/32] ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:49:01.257202 containerd[1463]: 2025-01-17 12:49:01.229 [INFO][3179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:49:01.257202 containerd[1463]: 2025-01-17 12:49:01.233 [INFO][3179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:49:01.258024 containerd[1463]: 2025-01-17 12:49:01.233 [INFO][3179] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"de332b37-785e-4aea-986e-f05a502d9686", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 49, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.93.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"72:6b:c6:09:50:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:49:01.258024 containerd[1463]: 2025-01-17 12:49:01.245 [INFO][3179] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.220-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:49:01.310037 containerd[1463]: time="2025-01-17T12:49:01.309799648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:49:01.311497 containerd[1463]: time="2025-01-17T12:49:01.310476059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:49:01.311497 containerd[1463]: time="2025-01-17T12:49:01.310523929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:49:01.311497 containerd[1463]: time="2025-01-17T12:49:01.310706304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:49:01.343414 systemd[1]: Started cri-containerd-08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139.scope - libcontainer container 08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139. Jan 17 12:49:01.382949 containerd[1463]: time="2025-01-17T12:49:01.382794886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:de332b37-785e-4aea-986e-f05a502d9686,Namespace:default,Attempt:0,} returns sandbox id \"08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139\"" Jan 17 12:49:01.384927 containerd[1463]: time="2025-01-17T12:49:01.384733254Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 12:49:02.164038 kubelet[1851]: E0117 12:49:02.163966 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:02.939291 systemd-networkd[1375]: cali60e51b789ff: Gained IPv6LL Jan 17 12:49:03.165086 kubelet[1851]: E0117 12:49:03.164950 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:04.165381 kubelet[1851]: E0117 12:49:04.165279 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:04.805892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731374017.mount: Deactivated successfully. 
Jan 17 12:49:05.166537 kubelet[1851]: E0117 12:49:05.166486 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:06.166699 kubelet[1851]: E0117 12:49:06.166603 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:06.896450 containerd[1463]: time="2025-01-17T12:49:06.896389130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:49:06.898046 containerd[1463]: time="2025-01-17T12:49:06.897793160Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 17 12:49:06.899292 containerd[1463]: time="2025-01-17T12:49:06.899226545Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:49:06.906774 containerd[1463]: time="2025-01-17T12:49:06.906346885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:49:06.908715 containerd[1463]: time="2025-01-17T12:49:06.908668759Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.523900278s" Jan 17 12:49:06.908715 containerd[1463]: time="2025-01-17T12:49:06.908707401Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 12:49:06.911194 containerd[1463]: time="2025-01-17T12:49:06.911090049Z" level=info msg="CreateContainer within sandbox \"08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 12:49:06.935582 containerd[1463]: time="2025-01-17T12:49:06.935530301Z" level=info msg="CreateContainer within sandbox \"08ac204ac408b3f0a297c5fc2293266594f44c7466dd8f2a310e50defca2c139\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"34003d13f72f16abb93a917e56e7c7c4935d94f62c80393e98392936805e4147\"" Jan 17 12:49:06.936267 containerd[1463]: time="2025-01-17T12:49:06.936137677Z" level=info msg="StartContainer for \"34003d13f72f16abb93a917e56e7c7c4935d94f62c80393e98392936805e4147\"" Jan 17 12:49:06.968482 systemd[1]: Started cri-containerd-34003d13f72f16abb93a917e56e7c7c4935d94f62c80393e98392936805e4147.scope - libcontainer container 34003d13f72f16abb93a917e56e7c7c4935d94f62c80393e98392936805e4147. 
Jan 17 12:49:06.996749 containerd[1463]: time="2025-01-17T12:49:06.996638516Z" level=info msg="StartContainer for \"34003d13f72f16abb93a917e56e7c7c4935d94f62c80393e98392936805e4147\" returns successfully" Jan 17 12:49:07.167178 kubelet[1851]: E0117 12:49:07.167102 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:07.509612 kubelet[1851]: I0117 12:49:07.509169 1851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.983865092 podStartE2EDuration="7.509137731s" podCreationTimestamp="2025-01-17 12:49:00 +0000 UTC" firstStartedPulling="2025-01-17 12:49:01.384298611 +0000 UTC m=+50.013718448" lastFinishedPulling="2025-01-17 12:49:06.909571251 +0000 UTC m=+55.538991087" observedRunningTime="2025-01-17 12:49:07.508837535 +0000 UTC m=+56.138257421" watchObservedRunningTime="2025-01-17 12:49:07.509137731 +0000 UTC m=+56.138557607" Jan 17 12:49:08.167871 kubelet[1851]: E0117 12:49:08.167792 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:09.168398 kubelet[1851]: E0117 12:49:09.168316 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:10.168972 kubelet[1851]: E0117 12:49:10.168876 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:11.170134 kubelet[1851]: E0117 12:49:11.170019 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:12.126838 kubelet[1851]: E0117 12:49:12.126746 1851 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:12.170820 kubelet[1851]: E0117 12:49:12.170737 1851 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:49:12.171841 containerd[1463]: time="2025-01-17T12:49:12.171287061Z" level=info msg="StopPodSandbox for \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\"" Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.250 [WARNING][3369] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"06495861-2bb3-4c45-95ca-1620c6e3c97e", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 48, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211", Pod:"nginx-deployment-8587fbcb89-x927j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliad8bfba02ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.250 [INFO][3369] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.250 [INFO][3369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" iface="eth0" netns="" Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.250 [INFO][3369] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.250 [INFO][3369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.277 [INFO][3377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" HandleID="k8s-pod-network.0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.277 [INFO][3377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.277 [INFO][3377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.292 [WARNING][3377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" HandleID="k8s-pod-network.0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.292 [INFO][3377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" HandleID="k8s-pod-network.0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.299 [INFO][3377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:49:12.302489 containerd[1463]: 2025-01-17 12:49:12.300 [INFO][3369] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:49:12.302489 containerd[1463]: time="2025-01-17T12:49:12.302157319Z" level=info msg="TearDown network for sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\" successfully" Jan 17 12:49:12.302489 containerd[1463]: time="2025-01-17T12:49:12.302208014Z" level=info msg="StopPodSandbox for \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\" returns successfully" Jan 17 12:49:12.303710 containerd[1463]: time="2025-01-17T12:49:12.303561353Z" level=info msg="RemovePodSandbox for \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\"" Jan 17 12:49:12.303710 containerd[1463]: time="2025-01-17T12:49:12.303594466Z" level=info msg="Forcibly stopping sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\"" Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.342 [WARNING][3397] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"06495861-2bb3-4c45-95ca-1620c6e3c97e", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 48, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"14cb2ad2f2a43e70914d4153a0799c11f11bd848e85a907948c37f4227dd4211", Pod:"nginx-deployment-8587fbcb89-x927j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliad8bfba02ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.342 [INFO][3397] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.342 [INFO][3397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" iface="eth0" netns="" Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.342 [INFO][3397] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.342 [INFO][3397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.366 [INFO][3403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" HandleID="k8s-pod-network.0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.366 [INFO][3403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.366 [INFO][3403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.378 [WARNING][3403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" HandleID="k8s-pod-network.0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.378 [INFO][3403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" HandleID="k8s-pod-network.0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Workload="172.24.4.220-k8s-nginx--deployment--8587fbcb89--x927j-eth0" Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.381 [INFO][3403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:49:12.386499 containerd[1463]: 2025-01-17 12:49:12.383 [INFO][3397] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec" Jan 17 12:49:12.388677 containerd[1463]: time="2025-01-17T12:49:12.387313836Z" level=info msg="TearDown network for sandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\" successfully" Jan 17 12:49:12.392044 containerd[1463]: time="2025-01-17T12:49:12.391994894Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:49:12.392150 containerd[1463]: time="2025-01-17T12:49:12.392045830Z" level=info msg="RemovePodSandbox \"0b6e01bdc32bcde15fb487e85ee9bf8bebcd1e7285c0e2643bdc900fba32c6ec\" returns successfully" Jan 17 12:49:12.392846 containerd[1463]: time="2025-01-17T12:49:12.392595084Z" level=info msg="StopPodSandbox for \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\"" Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.434 [WARNING][3421] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-csi--node--driver--cq6p7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e569825e-7420-42dc-bd20-5d7859eabb15", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b", Pod:"csi-node-driver-cq6p7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieae4c3992b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.434 [INFO][3421] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.434 [INFO][3421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" iface="eth0" netns="" Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.434 [INFO][3421] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.434 [INFO][3421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.458 [INFO][3427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" HandleID="k8s-pod-network.bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0" Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.458 [INFO][3427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.458 [INFO][3427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.470 [WARNING][3427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" HandleID="k8s-pod-network.bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0"
Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.470 [INFO][3427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" HandleID="k8s-pod-network.bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0"
Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.473 [INFO][3427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:49:12.477265 containerd[1463]: 2025-01-17 12:49:12.475 [INFO][3421] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b"
Jan 17 12:49:12.477265 containerd[1463]: time="2025-01-17T12:49:12.477270927Z" level=info msg="TearDown network for sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\" successfully"
Jan 17 12:49:12.478978 containerd[1463]: time="2025-01-17T12:49:12.477295654Z" level=info msg="StopPodSandbox for \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\" returns successfully"
Jan 17 12:49:12.478978 containerd[1463]: time="2025-01-17T12:49:12.477835460Z" level=info msg="RemovePodSandbox for \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\""
Jan 17 12:49:12.478978 containerd[1463]: time="2025-01-17T12:49:12.477858063Z" level=info msg="Forcibly stopping sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\""
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.526 [WARNING][3445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-csi--node--driver--cq6p7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e569825e-7420-42dc-bd20-5d7859eabb15", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"4c1d93692075c85fd7f495437b9c39322631bcfae25ca31bacbb44708f1a893b", Pod:"csi-node-driver-cq6p7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieae4c3992b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.526 [INFO][3445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b"
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.526 [INFO][3445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" iface="eth0" netns=""
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.526 [INFO][3445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b"
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.526 [INFO][3445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b"
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.563 [INFO][3451] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" HandleID="k8s-pod-network.bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0"
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.563 [INFO][3451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.563 [INFO][3451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.574 [WARNING][3451] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" HandleID="k8s-pod-network.bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0"
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.575 [INFO][3451] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" HandleID="k8s-pod-network.bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b" Workload="172.24.4.220-k8s-csi--node--driver--cq6p7-eth0"
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.577 [INFO][3451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:49:12.581116 containerd[1463]: 2025-01-17 12:49:12.579 [INFO][3445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b"
Jan 17 12:49:12.581561 containerd[1463]: time="2025-01-17T12:49:12.581201991Z" level=info msg="TearDown network for sandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\" successfully"
Jan 17 12:49:12.586021 containerd[1463]: time="2025-01-17T12:49:12.585989949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:49:12.586497 containerd[1463]: time="2025-01-17T12:49:12.586036657Z" level=info msg="RemovePodSandbox \"bdb6a252cc2b4adad7c13b84c683dfe7fa95f9f754bf4cbbb6531cd648e4124b\" returns successfully"
Jan 17 12:49:13.170988 kubelet[1851]: E0117 12:49:13.170904 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:14.171422 kubelet[1851]: E0117 12:49:14.171189 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:15.171813 kubelet[1851]: E0117 12:49:15.171723 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:16.171988 kubelet[1851]: E0117 12:49:16.171907 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:17.172872 kubelet[1851]: E0117 12:49:17.172789 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:18.174211 kubelet[1851]: E0117 12:49:18.174070 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:19.174936 kubelet[1851]: E0117 12:49:19.174848 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:20.175906 kubelet[1851]: E0117 12:49:20.175821 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:21.176378 kubelet[1851]: E0117 12:49:21.176315 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:22.177422 kubelet[1851]: E0117 12:49:22.177311 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:23.178492 kubelet[1851]: E0117 12:49:23.178424 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:24.178928 kubelet[1851]: E0117 12:49:24.178832 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:25.179705 kubelet[1851]: E0117 12:49:25.179558 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:26.179879 kubelet[1851]: E0117 12:49:26.179741 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:27.180684 kubelet[1851]: E0117 12:49:27.180610 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:28.181391 kubelet[1851]: E0117 12:49:28.181310 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:29.181875 kubelet[1851]: E0117 12:49:29.181771 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:30.182920 kubelet[1851]: E0117 12:49:30.182830 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:31.183721 kubelet[1851]: E0117 12:49:31.183638 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:31.595663 systemd[1]: Created slice kubepods-besteffort-podd79603df_8004_4dc6_883a_06873d934815.slice - libcontainer container kubepods-besteffort-podd79603df_8004_4dc6_883a_06873d934815.slice.
Jan 17 12:49:31.719411 kubelet[1851]: I0117 12:49:31.719173 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqv98\" (UniqueName: \"kubernetes.io/projected/d79603df-8004-4dc6-883a-06873d934815-kube-api-access-dqv98\") pod \"test-pod-1\" (UID: \"d79603df-8004-4dc6-883a-06873d934815\") " pod="default/test-pod-1"
Jan 17 12:49:31.719411 kubelet[1851]: I0117 12:49:31.719362 1851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5fcfec49-b91f-4222-907a-2c6301a38639\" (UniqueName: \"kubernetes.io/nfs/d79603df-8004-4dc6-883a-06873d934815-pvc-5fcfec49-b91f-4222-907a-2c6301a38639\") pod \"test-pod-1\" (UID: \"d79603df-8004-4dc6-883a-06873d934815\") " pod="default/test-pod-1"
Jan 17 12:49:31.897302 kernel: FS-Cache: Loaded
Jan 17 12:49:31.989711 kernel: RPC: Registered named UNIX socket transport module.
Jan 17 12:49:31.989868 kernel: RPC: Registered udp transport module.
Jan 17 12:49:31.989911 kernel: RPC: Registered tcp transport module.
Jan 17 12:49:31.990414 kernel: RPC: Registered tcp-with-tls transport module.
Jan 17 12:49:31.991258 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 17 12:49:32.126947 kubelet[1851]: E0117 12:49:32.126832 1851 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:32.184369 kubelet[1851]: E0117 12:49:32.184240 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:32.365383 kernel: NFS: Registering the id_resolver key type
Jan 17 12:49:32.365559 kernel: Key type id_resolver registered
Jan 17 12:49:32.365606 kernel: Key type id_legacy registered
Jan 17 12:49:32.410011 nfsidmap[3499]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Jan 17 12:49:32.418502 nfsidmap[3501]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Jan 17 12:49:32.502762 containerd[1463]: time="2025-01-17T12:49:32.502613081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d79603df-8004-4dc6-883a-06873d934815,Namespace:default,Attempt:0,}"
Jan 17 12:49:32.754069 systemd-networkd[1375]: cali5ec59c6bf6e: Link UP
Jan 17 12:49:32.757665 systemd-networkd[1375]: cali5ec59c6bf6e: Gained carrier
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.610 [INFO][3503] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.220-k8s-test--pod--1-eth0 default d79603df-8004-4dc6-883a-06873d934815 1329 0 2025-01-17 12:49:03 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.220 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.220-k8s-test--pod--1-"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.611 [INFO][3503] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.220-k8s-test--pod--1-eth0"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.666 [INFO][3514] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" HandleID="k8s-pod-network.71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Workload="172.24.4.220-k8s-test--pod--1-eth0"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.688 [INFO][3514] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" HandleID="k8s-pod-network.71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Workload="172.24.4.220-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294b20), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.220", "pod":"test-pod-1", "timestamp":"2025-01-17 12:49:32.666309292 +0000 UTC"}, Hostname:"172.24.4.220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.688 [INFO][3514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.688 [INFO][3514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.688 [INFO][3514] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.220'
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.693 [INFO][3514] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" host="172.24.4.220"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.701 [INFO][3514] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.220"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.711 [INFO][3514] ipam/ipam.go 489: Trying affinity for 192.168.93.192/26 host="172.24.4.220"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.716 [INFO][3514] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.192/26 host="172.24.4.220"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.721 [INFO][3514] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="172.24.4.220"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.721 [INFO][3514] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" host="172.24.4.220"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.725 [INFO][3514] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.733 [INFO][3514] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.192/26 handle="k8s-pod-network.71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" host="172.24.4.220"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.745 [INFO][3514] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.196/26] block=192.168.93.192/26 handle="k8s-pod-network.71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" host="172.24.4.220"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.745 [INFO][3514] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.196/26] handle="k8s-pod-network.71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" host="172.24.4.220"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.745 [INFO][3514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.745 [INFO][3514] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.196/26] IPv6=[] ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" HandleID="k8s-pod-network.71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Workload="172.24.4.220-k8s-test--pod--1-eth0"
Jan 17 12:49:32.781495 containerd[1463]: 2025-01-17 12:49:32.748 [INFO][3503] cni-plugin/k8s.go 386: Populated endpoint ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.220-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d79603df-8004-4dc6-883a-06873d934815", ResourceVersion:"1329", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:49:32.784633 containerd[1463]: 2025-01-17 12:49:32.748 [INFO][3503] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.196/32] ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.220-k8s-test--pod--1-eth0"
Jan 17 12:49:32.784633 containerd[1463]: 2025-01-17 12:49:32.748 [INFO][3503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.220-k8s-test--pod--1-eth0"
Jan 17 12:49:32.784633 containerd[1463]: 2025-01-17 12:49:32.758 [INFO][3503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.220-k8s-test--pod--1-eth0"
Jan 17 12:49:32.784633 containerd[1463]: 2025-01-17 12:49:32.758 [INFO][3503] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.220-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.220-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d79603df-8004-4dc6-883a-06873d934815", ResourceVersion:"1329", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.220", ContainerID:"71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"1e:bc:06:87:84:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:49:32.784633 containerd[1463]: 2025-01-17 12:49:32.779 [INFO][3503] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.220-k8s-test--pod--1-eth0"
Jan 17 12:49:32.828339 containerd[1463]: time="2025-01-17T12:49:32.828185445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:49:32.828339 containerd[1463]: time="2025-01-17T12:49:32.828261549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:49:32.828339 containerd[1463]: time="2025-01-17T12:49:32.828281576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:49:32.829353 containerd[1463]: time="2025-01-17T12:49:32.828700252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:49:32.846372 systemd[1]: Started cri-containerd-71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd.scope - libcontainer container 71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd.
Jan 17 12:49:32.888082 containerd[1463]: time="2025-01-17T12:49:32.887982850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d79603df-8004-4dc6-883a-06873d934815,Namespace:default,Attempt:0,} returns sandbox id \"71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd\""
Jan 17 12:49:32.889716 containerd[1463]: time="2025-01-17T12:49:32.889556846Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 17 12:49:33.185147 kubelet[1851]: E0117 12:49:33.185063 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:33.315077 containerd[1463]: time="2025-01-17T12:49:33.314788371Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:49:33.316898 containerd[1463]: time="2025-01-17T12:49:33.316811480Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 17 12:49:33.332776 containerd[1463]: time="2025-01-17T12:49:33.332428102Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 442.690156ms"
Jan 17 12:49:33.332776 containerd[1463]: time="2025-01-17T12:49:33.332510587Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 17 12:49:33.337158 containerd[1463]: time="2025-01-17T12:49:33.336723797Z" level=info msg="CreateContainer within sandbox \"71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 17 12:49:33.364604 containerd[1463]: time="2025-01-17T12:49:33.364531450Z" level=info msg="CreateContainer within sandbox \"71b19d46809f2327f2296ebe2f2ccc6473915ca0cf1d3ed34414f5de56c1d6dd\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6773ec27d85c662d2a5b9843859e69de7dd044af8da16f37dc24259fb35cb2e7\""
Jan 17 12:49:33.366806 containerd[1463]: time="2025-01-17T12:49:33.366723895Z" level=info msg="StartContainer for \"6773ec27d85c662d2a5b9843859e69de7dd044af8da16f37dc24259fb35cb2e7\""
Jan 17 12:49:33.424365 systemd[1]: Started cri-containerd-6773ec27d85c662d2a5b9843859e69de7dd044af8da16f37dc24259fb35cb2e7.scope - libcontainer container 6773ec27d85c662d2a5b9843859e69de7dd044af8da16f37dc24259fb35cb2e7.
Jan 17 12:49:33.450965 containerd[1463]: time="2025-01-17T12:49:33.450301512Z" level=info msg="StartContainer for \"6773ec27d85c662d2a5b9843859e69de7dd044af8da16f37dc24259fb35cb2e7\" returns successfully"
Jan 17 12:49:33.603677 kubelet[1851]: I0117 12:49:33.603409 1851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=30.158824485 podStartE2EDuration="30.603377348s" podCreationTimestamp="2025-01-17 12:49:03 +0000 UTC" firstStartedPulling="2025-01-17 12:49:32.889089618 +0000 UTC m=+81.518509444" lastFinishedPulling="2025-01-17 12:49:33.333642421 +0000 UTC m=+81.963062307" observedRunningTime="2025-01-17 12:49:33.60320226 +0000 UTC m=+82.232622146" watchObservedRunningTime="2025-01-17 12:49:33.603377348 +0000 UTC m=+82.232797254"
Jan 17 12:49:33.912482 systemd-networkd[1375]: cali5ec59c6bf6e: Gained IPv6LL
Jan 17 12:49:34.185787 kubelet[1851]: E0117 12:49:34.185583 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:35.186402 kubelet[1851]: E0117 12:49:35.186318 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:36.186916 kubelet[1851]: E0117 12:49:36.186801 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:37.188073 kubelet[1851]: E0117 12:49:37.187965 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:38.189029 kubelet[1851]: E0117 12:49:38.188939 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:39.189281 kubelet[1851]: E0117 12:49:39.189156 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:40.190121 kubelet[1851]: E0117 12:49:40.190040 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:41.190753 kubelet[1851]: E0117 12:49:41.190639 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:42.161476 systemd[1]: run-containerd-runc-k8s.io-6a32dcdee1abb4c36a6efcca409e724a15399515147af0f319a4c8fd4123d93c-runc.7HEkA8.mount: Deactivated successfully.
Jan 17 12:49:42.191559 kubelet[1851]: E0117 12:49:42.191490 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:49:43.192756 kubelet[1851]: E0117 12:49:43.192673 1851 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"