Dec 13 02:31:35.933624 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 02:31:35.933664 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:31:35.933686 kernel: BIOS-provided physical RAM map:
Dec 13 02:31:35.933699 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:31:35.933712 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:31:35.933725 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:31:35.933740 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 02:31:35.933754 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 02:31:35.933767 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 02:31:35.933784 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:31:35.933797 kernel: NX (Execute Disable) protection: active
Dec 13 02:31:35.933811 kernel: APIC: Static calls initialized
Dec 13 02:31:35.933824 kernel: SMBIOS 2.8 present.
Dec 13 02:31:35.933838 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 02:31:35.933854 kernel: Hypervisor detected: KVM
Dec 13 02:31:35.933872 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:31:35.933893 kernel: kvm-clock: using sched offset of 5980891438 cycles
Dec 13 02:31:35.933914 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:31:35.933938 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 02:31:35.933961 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:31:35.933977 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:31:35.933992 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 02:31:35.934007 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 02:31:35.934022 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:31:35.934043 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:31:35.934058 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 02:31:35.934073 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:31:35.934087 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:31:35.934102 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:31:35.934116 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 02:31:35.934131 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:31:35.934145 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:31:35.934160 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 02:31:35.934319 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 02:31:35.934338 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 02:31:35.934353 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 02:31:35.934367 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 02:31:35.934382 kernel: No NUMA configuration found
Dec 13 02:31:35.934396 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 02:31:35.934411 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 02:31:35.934435 kernel: Zone ranges:
Dec 13 02:31:35.934453 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:31:35.934468 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 02:31:35.934484 kernel: Normal empty
Dec 13 02:31:35.934499 kernel: Movable zone start for each node
Dec 13 02:31:35.934514 kernel: Early memory node ranges
Dec 13 02:31:35.934529 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:31:35.934544 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 02:31:35.934562 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 02:31:35.934578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:31:35.934593 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:31:35.934608 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 02:31:35.934623 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 02:31:35.934638 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:31:35.934653 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:31:35.934669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 02:31:35.934684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:31:35.934702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:31:35.934717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:31:35.934732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:31:35.934747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:31:35.934763 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:31:35.934778 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 02:31:35.934793 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 02:31:35.934808 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:31:35.934824 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:31:35.934843 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:31:35.934858 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 02:31:35.934874 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 02:31:35.934888 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:31:35.934903 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 02:31:35.934921 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:31:35.934938 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:31:35.934953 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:31:35.934972 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:31:35.934987 kernel: Fallback order for Node 0: 0
Dec 13 02:31:35.935002 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 02:31:35.935017 kernel: Policy zone: DMA32
Dec 13 02:31:35.935033 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:31:35.935048 kernel: Memory: 1971212K/2096620K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 02:31:35.935064 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:31:35.935079 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 02:31:35.935097 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 02:31:35.935113 kernel: Dynamic Preempt: voluntary
Dec 13 02:31:35.935128 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 02:31:35.935144 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:31:35.935160 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:31:35.935175 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 02:31:35.935191 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:31:35.935206 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:31:35.935221 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:31:35.935236 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:31:35.935255 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:31:35.935270 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 02:31:35.935285 kernel: Console: colour VGA+ 80x25
Dec 13 02:31:35.935343 kernel: printk: console [tty0] enabled
Dec 13 02:31:35.935360 kernel: printk: console [ttyS0] enabled
Dec 13 02:31:35.935375 kernel: ACPI: Core revision 20230628
Dec 13 02:31:35.935390 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:31:35.935405 kernel: x2apic enabled
Dec 13 02:31:35.935420 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 02:31:35.935441 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 02:31:35.935456 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 02:31:35.935471 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 02:31:35.935487 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 02:31:35.935502 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 02:31:35.935517 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:31:35.935533 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:31:35.935548 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:31:35.935563 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:31:35.935582 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:31:35.935597 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 02:31:35.935612 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:31:35.935627 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:31:35.935643 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 02:31:35.935658 kernel: landlock: Up and running.
Dec 13 02:31:35.935673 kernel: SELinux: Initializing.
Dec 13 02:31:35.935688 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:31:35.935709 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:31:35.935720 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 02:31:35.935731 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:31:35.935742 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:31:35.935754 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:31:35.935762 kernel: Performance Events: AMD PMU driver.
Dec 13 02:31:35.935771 kernel: ... version: 0
Dec 13 02:31:35.935780 kernel: ... bit width: 48
Dec 13 02:31:35.935790 kernel: ... generic registers: 4
Dec 13 02:31:35.935799 kernel: ... value mask: 0000ffffffffffff
Dec 13 02:31:35.935808 kernel: ... max period: 00007fffffffffff
Dec 13 02:31:35.935816 kernel: ... fixed-purpose events: 0
Dec 13 02:31:35.935825 kernel: ... event mask: 000000000000000f
Dec 13 02:31:35.935833 kernel: signal: max sigframe size: 1440
Dec 13 02:31:35.935841 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:31:35.935850 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 02:31:35.935859 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:31:35.935867 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 02:31:35.935878 kernel: .... node #0, CPUs: #1
Dec 13 02:31:35.935886 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:31:35.935895 kernel: smpboot: Max logical packages: 2
Dec 13 02:31:35.935903 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 02:31:35.935912 kernel: devtmpfs: initialized
Dec 13 02:31:35.935920 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:31:35.935929 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:31:35.935938 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:31:35.935947 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:31:35.935957 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:31:35.935965 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:31:35.935974 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:31:35.935982 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:31:35.935991 kernel: audit: type=2000 audit(1734057095.130:1): state=initialized audit_enabled=0 res=1
Dec 13 02:31:35.936000 kernel: cpuidle: using governor menu
Dec 13 02:31:35.936009 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:31:35.936017 kernel: dca service started, version 1.12.1
Dec 13 02:31:35.936026 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:31:35.936037 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:31:35.936045 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:31:35.936061 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 02:31:35.936070 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:31:35.936079 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:31:35.936087 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:31:35.936096 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:31:35.936104 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:31:35.936113 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 02:31:35.936123 kernel: ACPI: Interpreter enabled
Dec 13 02:31:35.936132 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 02:31:35.936141 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:31:35.936149 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:31:35.936158 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 02:31:35.936167 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 02:31:35.936175 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:31:35.936328 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:31:35.936431 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 02:31:35.936521 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 02:31:35.936534 kernel: acpiphp: Slot [3] registered
Dec 13 02:31:35.936543 kernel: acpiphp: Slot [4] registered
Dec 13 02:31:35.936551 kernel: acpiphp: Slot [5] registered
Dec 13 02:31:35.936560 kernel: acpiphp: Slot [6] registered
Dec 13 02:31:35.936568 kernel: acpiphp: Slot [7] registered
Dec 13 02:31:35.936576 kernel: acpiphp: Slot [8] registered
Dec 13 02:31:35.936588 kernel: acpiphp: Slot [9] registered
Dec 13 02:31:35.936596 kernel: acpiphp: Slot [10] registered
Dec 13 02:31:35.936605 kernel: acpiphp: Slot [11] registered
Dec 13 02:31:35.936613 kernel: acpiphp: Slot [12] registered
Dec 13 02:31:35.936622 kernel: acpiphp: Slot [13] registered
Dec 13 02:31:35.936630 kernel: acpiphp: Slot [14] registered
Dec 13 02:31:35.936639 kernel: acpiphp: Slot [15] registered
Dec 13 02:31:35.936647 kernel: acpiphp: Slot [16] registered
Dec 13 02:31:35.936655 kernel: acpiphp: Slot [17] registered
Dec 13 02:31:35.936664 kernel: acpiphp: Slot [18] registered
Dec 13 02:31:35.936674 kernel: acpiphp: Slot [19] registered
Dec 13 02:31:35.936682 kernel: acpiphp: Slot [20] registered
Dec 13 02:31:35.936691 kernel: acpiphp: Slot [21] registered
Dec 13 02:31:35.936699 kernel: acpiphp: Slot [22] registered
Dec 13 02:31:35.936708 kernel: acpiphp: Slot [23] registered
Dec 13 02:31:35.936716 kernel: acpiphp: Slot [24] registered
Dec 13 02:31:35.936724 kernel: acpiphp: Slot [25] registered
Dec 13 02:31:35.936733 kernel: acpiphp: Slot [26] registered
Dec 13 02:31:35.936741 kernel: acpiphp: Slot [27] registered
Dec 13 02:31:35.936752 kernel: acpiphp: Slot [28] registered
Dec 13 02:31:35.936762 kernel: acpiphp: Slot [29] registered
Dec 13 02:31:35.936771 kernel: acpiphp: Slot [30] registered
Dec 13 02:31:35.936780 kernel: acpiphp: Slot [31] registered
Dec 13 02:31:35.936789 kernel: PCI host bridge to bus 0000:00
Dec 13 02:31:35.936887 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:31:35.936978 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:31:35.937066 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:31:35.937162 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:31:35.937248 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 02:31:35.937372 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:31:35.937489 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:31:35.937595 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:31:35.937701 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 02:31:35.937805 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 02:31:35.937902 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 02:31:35.937998 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 02:31:35.938094 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 02:31:35.938225 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 02:31:35.938354 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 02:31:35.938454 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 02:31:35.938557 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 02:31:35.938666 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 02:31:35.938758 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 02:31:35.938848 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 02:31:35.938937 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 02:31:35.939026 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 02:31:35.939115 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:31:35.939219 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 02:31:35.939353 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 02:31:35.939451 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 02:31:35.939541 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 02:31:35.939630 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 02:31:35.939727 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 02:31:35.939819 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 02:31:35.939914 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 02:31:35.940002 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 02:31:35.940099 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 02:31:35.940191 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 02:31:35.940280 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 02:31:35.940417 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 02:31:35.940513 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 02:31:35.940601 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 02:31:35.940614 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:31:35.940623 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:31:35.940632 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:31:35.940640 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:31:35.940649 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:31:35.940660 kernel: iommu: Default domain type: Translated
Dec 13 02:31:35.940669 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:31:35.940681 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:31:35.940690 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:31:35.940699 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:31:35.940708 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 02:31:35.940803 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 02:31:35.940898 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 02:31:35.940991 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:31:35.941005 kernel: vgaarb: loaded
Dec 13 02:31:35.941015 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:31:35.941029 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:31:35.941038 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:31:35.941047 kernel: pnp: PnP ACPI init
Dec 13 02:31:35.941145 kernel: pnp 00:03: [dma 2]
Dec 13 02:31:35.941160 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:31:35.941170 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:31:35.941179 kernel: NET: Registered PF_INET protocol family
Dec 13 02:31:35.941189 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:31:35.941202 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:31:35.941212 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:31:35.941221 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:31:35.941230 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 02:31:35.941240 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:31:35.941249 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:31:35.941259 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:31:35.941268 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:31:35.941277 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:31:35.941425 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:31:35.941515 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:31:35.941601 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:31:35.941684 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:31:35.941763 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 02:31:35.941853 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 02:31:35.941944 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:31:35.941957 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:31:35.941970 kernel: Initialise system trusted keyrings
Dec 13 02:31:35.941979 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:31:35.941988 kernel: Key type asymmetric registered
Dec 13 02:31:35.941996 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:31:35.942005 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 02:31:35.942014 kernel: io scheduler mq-deadline registered
Dec 13 02:31:35.942032 kernel: io scheduler kyber registered
Dec 13 02:31:35.942040 kernel: io scheduler bfq registered
Dec 13 02:31:35.942049 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:31:35.942060 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 02:31:35.942069 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 02:31:35.942078 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 02:31:35.942087 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 02:31:35.942095 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:31:35.942104 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:31:35.942113 kernel: random: crng init done
Dec 13 02:31:35.942121 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:31:35.942130 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:31:35.942140 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:31:35.942241 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 02:31:35.942580 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 02:31:35.942674 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T02:31:35 UTC (1734057095)
Dec 13 02:31:35.942761 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 02:31:35.942775 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 02:31:35.942785 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 02:31:35.942799 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:31:35.942808 kernel: Segment Routing with IPv6
Dec 13 02:31:35.942818 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:31:35.942827 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:31:35.942836 kernel: Key type dns_resolver registered
Dec 13 02:31:35.942845 kernel: IPI shorthand broadcast: enabled
Dec 13 02:31:35.942854 kernel: sched_clock: Marking stable (889010805, 130854285)->(1023268209, -3403119)
Dec 13 02:31:35.942864 kernel: registered taskstats version 1
Dec 13 02:31:35.942873 kernel: Loading compiled-in X.509 certificates
Dec 13 02:31:35.942883 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 02:31:35.942895 kernel: Key type .fscrypt registered
Dec 13 02:31:35.942904 kernel: Key type fscrypt-provisioning registered
Dec 13 02:31:35.942913 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:31:35.942923 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:31:35.942932 kernel: ima: No architecture policies found
Dec 13 02:31:35.942941 kernel: clk: Disabling unused clocks
Dec 13 02:31:35.942950 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 02:31:35.942959 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 02:31:35.942971 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 02:31:35.942981 kernel: Run /init as init process
Dec 13 02:31:35.942990 kernel: with arguments:
Dec 13 02:31:35.942999 kernel: /init
Dec 13 02:31:35.943008 kernel: with environment:
Dec 13 02:31:35.943017 kernel: HOME=/
Dec 13 02:31:35.943026 kernel: TERM=linux
Dec 13 02:31:35.943036 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:31:35.943047 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 02:31:35.943140 systemd[1]: Detected virtualization kvm.
Dec 13 02:31:35.943153 systemd[1]: Detected architecture x86-64.
Dec 13 02:31:35.943164 systemd[1]: Running in initrd.
Dec 13 02:31:35.943174 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:31:35.943183 systemd[1]: Hostname set to .
Dec 13 02:31:35.943194 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:31:35.943204 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:31:35.943217 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:31:35.943228 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:31:35.943239 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 02:31:35.943250 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 02:31:35.943260 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 02:31:35.943271 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 02:31:35.943282 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 02:31:35.943312 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 02:31:35.943323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:31:35.943334 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:31:35.943344 systemd[1]: Reached target paths.target - Path Units.
Dec 13 02:31:35.943380 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 02:31:35.943393 systemd[1]: Reached target swap.target - Swaps.
Dec 13 02:31:35.943404 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 02:31:35.943415 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 02:31:35.943426 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 02:31:35.943436 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 02:31:35.943446 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 02:31:35.943457 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:31:35.943467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:31:35.943478 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:31:35.943490 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 02:31:35.943501 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 02:31:35.943511 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 02:31:35.943522 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 02:31:35.943532 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:31:35.943542 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 02:31:35.943555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 02:31:35.943566 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:31:35.943576 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 02:31:35.943610 systemd-journald[185]: Collecting audit messages is disabled.
Dec 13 02:31:35.943634 systemd-journald[185]: Journal started
Dec 13 02:31:35.943659 systemd-journald[185]: Runtime Journal (/run/log/journal/28cede2e90804f928ee54f3b65e42e53) is 4.9M, max 39.3M, 34.4M free.
Dec 13 02:31:35.948118 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:31:35.951452 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 02:31:35.951841 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:31:35.961589 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 02:31:35.967231 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 02:31:35.968038 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:31:35.989533 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:31:36.026501 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:31:36.026565 kernel: Bridge firewalling registered
Dec 13 02:31:36.015836 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 02:31:36.034263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:31:36.036126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:31:36.045521 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:31:36.047614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:31:36.049523 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 02:31:36.051123 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:31:36.067609 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:31:36.073461 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 02:31:36.074100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:31:36.074754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:31:36.085544 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 02:31:36.092326 dracut-cmdline[215]: dracut-dracut-053
Dec 13 02:31:36.096200 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:31:36.114184 systemd-resolved[221]: Positive Trust Anchors:
Dec 13 02:31:36.114218 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:31:36.114265 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 02:31:36.117157 systemd-resolved[221]: Defaulting to hostname 'linux'.
Dec 13 02:31:36.118980 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 02:31:36.120888 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:31:36.186379 kernel: SCSI subsystem initialized
Dec 13 02:31:36.198379 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:31:36.212846 kernel: iscsi: registered transport (tcp)
Dec 13 02:31:36.236831 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:31:36.237214 kernel: QLogic iSCSI HBA Driver
Dec 13 02:31:36.299198 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 02:31:36.305484 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 02:31:36.376539 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:31:36.377201 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:31:36.378473 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 02:31:36.453417 kernel: raid6: sse2x4 gen() 4584 MB/s
Dec 13 02:31:36.470373 kernel: raid6: sse2x2 gen() 6065 MB/s
Dec 13 02:31:36.487527 kernel: raid6: sse2x1 gen() 9528 MB/s
Dec 13 02:31:36.487594 kernel: raid6: using algorithm sse2x1 gen() 9528 MB/s
Dec 13 02:31:36.505712 kernel: raid6: .... xor() 7175 MB/s, rmw enabled
Dec 13 02:31:36.505788 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 02:31:36.529526 kernel: xor: measuring software checksum speed
Dec 13 02:31:36.529599 kernel: prefetch64-sse : 18487 MB/sec
Dec 13 02:31:36.530565 kernel: generic_sse : 16821 MB/sec
Dec 13 02:31:36.530624 kernel: xor: using function: prefetch64-sse (18487 MB/sec)
Dec 13 02:31:36.724411 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 02:31:36.742493 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 02:31:36.751600 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:31:36.775467 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Dec 13 02:31:36.780471 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:31:36.794360 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 02:31:36.825599 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Dec 13 02:31:36.877591 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 02:31:36.885557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 02:31:36.959091 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:31:36.968427 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 02:31:37.006564 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 02:31:37.010287 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 02:31:37.012558 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:31:37.014054 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 02:31:37.022841 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 02:31:37.041032 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 02:31:37.044801 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Dec 13 02:31:37.071453 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Dec 13 02:31:37.071588 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:31:37.071602 kernel: GPT:17805311 != 41943039
Dec 13 02:31:37.071613 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:31:37.071625 kernel: GPT:17805311 != 41943039
Dec 13 02:31:37.071635 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:31:37.071652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:31:37.084340 kernel: libata version 3.00 loaded.
Dec 13 02:31:37.094335 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 02:31:37.115126 kernel: scsi host0: ata_piix
Dec 13 02:31:37.115321 kernel: scsi host1: ata_piix
Dec 13 02:31:37.115458 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Dec 13 02:31:37.115475 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Dec 13 02:31:37.123190 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
Dec 13 02:31:37.121670 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 02:31:37.129364 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (452)
Dec 13 02:31:37.139839 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 02:31:37.143919 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:31:37.144123 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:31:37.145457 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:31:37.146239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:31:37.146379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:31:37.147885 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:31:37.157532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:31:37.165750 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 02:31:37.171276 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 02:31:37.171933 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 02:31:37.185503 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 02:31:37.231749 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:31:37.235470 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:31:37.251772 disk-uuid[500]: Primary Header is updated.
Dec 13 02:31:37.251772 disk-uuid[500]: Secondary Entries is updated.
Dec 13 02:31:37.251772 disk-uuid[500]: Secondary Header is updated.
Dec 13 02:31:37.261369 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:31:37.263736 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:31:37.272266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:31:38.285426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:31:38.286274 disk-uuid[505]: The operation has completed successfully.
Dec 13 02:31:38.348685 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:31:38.349798 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 02:31:38.400421 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 02:31:38.414836 sh[523]: Success
Dec 13 02:31:38.445345 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Dec 13 02:31:38.522326 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 02:31:38.540012 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 02:31:38.542278 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 02:31:38.571695 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 02:31:38.571800 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:31:38.573445 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 02:31:38.578063 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 02:31:38.580567 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 02:31:38.603911 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 02:31:38.606138 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 02:31:38.616643 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 02:31:38.622597 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 02:31:38.645339 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:31:38.645419 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:31:38.647124 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:31:38.657367 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:31:38.680797 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:31:38.680011 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:31:38.700057 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 02:31:38.711652 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 02:31:38.785335 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 02:31:38.795383 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 02:31:38.818567 systemd-networkd[706]: lo: Link UP
Dec 13 02:31:38.818579 systemd-networkd[706]: lo: Gained carrier
Dec 13 02:31:38.822070 systemd-networkd[706]: Enumeration completed
Dec 13 02:31:38.822706 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 02:31:38.823322 systemd[1]: Reached target network.target - Network.
Dec 13 02:31:38.823915 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:31:38.823918 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:31:38.824968 systemd-networkd[706]: eth0: Link UP
Dec 13 02:31:38.824973 systemd-networkd[706]: eth0: Gained carrier
Dec 13 02:31:38.824983 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:31:38.845380 systemd-networkd[706]: eth0: DHCPv4 address 172.24.4.31/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 02:31:38.871788 ignition[620]: Ignition 2.19.0
Dec 13 02:31:38.871804 ignition[620]: Stage: fetch-offline
Dec 13 02:31:38.873652 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 02:31:38.871845 ignition[620]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:31:38.871857 ignition[620]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:31:38.871977 ignition[620]: parsed url from cmdline: ""
Dec 13 02:31:38.871981 ignition[620]: no config URL provided
Dec 13 02:31:38.871987 ignition[620]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:31:38.871996 ignition[620]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:31:38.872001 ignition[620]: failed to fetch config: resource requires networking
Dec 13 02:31:38.872256 ignition[620]: Ignition finished successfully
Dec 13 02:31:38.880494 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 02:31:38.894755 ignition[715]: Ignition 2.19.0
Dec 13 02:31:38.894768 ignition[715]: Stage: fetch
Dec 13 02:31:38.894946 ignition[715]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:31:38.894958 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:31:38.895051 ignition[715]: parsed url from cmdline: ""
Dec 13 02:31:38.895054 ignition[715]: no config URL provided
Dec 13 02:31:38.895061 ignition[715]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:31:38.895069 ignition[715]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:31:38.895202 ignition[715]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 02:31:38.895267 ignition[715]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 02:31:38.895318 ignition[715]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 02:31:39.105761 ignition[715]: GET result: OK
Dec 13 02:31:39.105867 ignition[715]: parsing config with SHA512: 39d1b5e19bd84d5d992f36196502834b595fddb67b4a5bd14f800f0caeb7defa281e836109494ef80fcf95b922879f2abbc00d10419ba93a4918e41c254f889e
Dec 13 02:31:39.115803 unknown[715]: fetched base config from "system"
Dec 13 02:31:39.116461 ignition[715]: fetch: fetch complete
Dec 13 02:31:39.115826 unknown[715]: fetched base config from "system"
Dec 13 02:31:39.116473 ignition[715]: fetch: fetch passed
Dec 13 02:31:39.115841 unknown[715]: fetched user config from "openstack"
Dec 13 02:31:39.116562 ignition[715]: Ignition finished successfully
Dec 13 02:31:39.120775 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 02:31:39.138558 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 02:31:39.169775 ignition[721]: Ignition 2.19.0
Dec 13 02:31:39.169801 ignition[721]: Stage: kargs
Dec 13 02:31:39.170231 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:31:39.170259 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:31:39.175748 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 02:31:39.172061 ignition[721]: kargs: kargs passed
Dec 13 02:31:39.172154 ignition[721]: Ignition finished successfully
Dec 13 02:31:39.185784 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 02:31:39.218326 ignition[727]: Ignition 2.19.0
Dec 13 02:31:39.218347 ignition[727]: Stage: disks
Dec 13 02:31:39.218756 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:31:39.222836 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 02:31:39.218782 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:31:39.224863 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 02:31:39.220629 ignition[727]: disks: disks passed
Dec 13 02:31:39.227176 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 02:31:39.220722 ignition[727]: Ignition finished successfully
Dec 13 02:31:39.228882 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 02:31:39.230339 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 02:31:39.231973 systemd[1]: Reached target basic.target - Basic System.
Dec 13 02:31:39.245466 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 02:31:39.545956 systemd-fsck[735]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 02:31:39.566205 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 02:31:39.577532 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 02:31:39.861332 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 02:31:39.863393 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 02:31:39.865416 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 02:31:39.900466 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:31:39.924470 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 02:31:39.927080 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 02:31:39.930624 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 02:31:39.931928 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:31:39.931989 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 02:31:39.942647 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 02:31:39.949619 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 02:31:39.999362 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (743)
Dec 13 02:31:40.041137 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:31:40.041235 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:31:40.044191 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:31:40.141358 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:31:40.172632 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:31:40.406715 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:31:40.419044 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:31:40.425836 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:31:40.436615 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:31:40.559853 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 02:31:40.568470 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 02:31:40.574510 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 02:31:40.583146 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 02:31:40.587319 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:31:40.618911 ignition[862]: INFO : Ignition 2.19.0
Dec 13 02:31:40.618911 ignition[862]: INFO : Stage: mount
Dec 13 02:31:40.620722 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:31:40.620722 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:31:40.620722 ignition[862]: INFO : mount: mount passed
Dec 13 02:31:40.620722 ignition[862]: INFO : Ignition finished successfully
Dec 13 02:31:40.621284 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 02:31:40.635556 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 02:31:40.850239 systemd-networkd[706]: eth0: Gained IPv6LL Dec 13 02:31:47.528897 coreos-metadata[745]: Dec 13 02:31:47.528 WARN failed to locate config-drive, using the metadata service API instead Dec 13 02:31:47.566766 coreos-metadata[745]: Dec 13 02:31:47.566 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 02:31:47.583635 coreos-metadata[745]: Dec 13 02:31:47.583 INFO Fetch successful Dec 13 02:31:47.584927 coreos-metadata[745]: Dec 13 02:31:47.584 INFO wrote hostname ci-4081-2-1-d-77a3414d8f.novalocal to /sysroot/etc/hostname Dec 13 02:31:47.589520 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 02:31:47.590422 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 13 02:31:47.603469 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 02:31:47.641810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 02:31:47.670428 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (879) Dec 13 02:31:47.684530 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 02:31:47.684610 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:31:47.688764 kernel: BTRFS info (device vda6): using free space tree Dec 13 02:31:47.702385 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 02:31:47.707994 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 02:31:47.751395 ignition[897]: INFO : Ignition 2.19.0 Dec 13 02:31:47.751395 ignition[897]: INFO : Stage: files Dec 13 02:31:47.754462 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:31:47.754462 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 02:31:47.754462 ignition[897]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:31:47.759883 ignition[897]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:31:47.759883 ignition[897]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:31:47.763667 ignition[897]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:31:47.763667 ignition[897]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:31:47.767372 ignition[897]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:31:47.764027 unknown[897]: wrote ssh authorized keys file for user: core Dec 13 02:31:47.770954 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:31:47.773034 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:31:47.773034 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:31:47.773034 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:31:47.773034 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:31:47.773034 
ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:31:47.773034 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:31:47.773034 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 02:31:48.185414 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 02:31:49.814829 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:31:49.814829 ignition[897]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:31:49.823758 ignition[897]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:31:49.823758 ignition[897]: INFO : files: files passed Dec 13 02:31:49.823758 ignition[897]: INFO : Ignition finished successfully Dec 13 02:31:49.819837 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 02:31:49.828481 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 02:31:49.830429 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 02:31:49.892040 initrd-setup-root-after-ignition[924]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:31:49.892040 initrd-setup-root-after-ignition[924]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:31:49.897026 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:31:49.894056 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 02:31:49.895672 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 02:31:49.902502 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 02:31:49.939574 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:31:49.939778 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 02:31:49.942642 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 02:31:49.945377 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 02:31:49.987917 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:31:49.988141 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 02:31:49.991061 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 02:31:50.003554 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 02:31:50.035442 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:31:50.051677 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 02:31:50.075808 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
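The coreos-metadata lines above show the agent failing to find a config-drive, falling back to the EC2-compatible metadata endpoint, and writing the returned hostname into /sysroot/etc/hostname. Below is a minimal sketch of that single request, using only Python's standard library and the URL taken from the log; it illustrates the API call, not the agent's actual implementation.

```python
# Illustrative sketch only: mimics the one metadata request logged above.
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/hostname"  # endpoint from the log

def fetch_hostname(timeout: float = 5.0) -> str:
    """Fetch the instance hostname from the EC2-compatible metadata service."""
    with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    # The agent in the log wrote this value to /sysroot/etc/hostname.
    print(fetch_hostname())
```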
Dec 13 02:31:50.079567 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:31:50.081533 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 02:31:50.084210 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:31:50.084758 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:31:50.087823 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 02:31:50.089783 systemd[1]: Stopped target basic.target - Basic System. Dec 13 02:31:50.092528 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 02:31:50.095216 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 02:31:50.097835 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 02:31:50.100813 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 02:31:50.103628 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 02:31:50.106637 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 02:31:50.109387 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 02:31:50.121191 systemd[1]: Stopped target swap.target - Swaps. Dec 13 02:31:50.123746 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:31:50.124080 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 02:31:50.127579 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:31:50.130794 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:31:50.133504 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 02:31:50.136568 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:31:50.139153 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:31:50.139661 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 02:31:50.143759 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:31:50.144228 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 02:31:50.147424 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:31:50.147706 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 02:31:50.158919 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 02:31:50.162074 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:31:50.162611 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:31:50.173849 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 02:31:50.175130 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:31:50.176596 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:31:50.179839 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:31:50.180543 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 02:31:50.197277 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:31:50.197400 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Dec 13 02:31:50.212354 ignition[949]: INFO : Ignition 2.19.0 Dec 13 02:31:50.212354 ignition[949]: INFO : Stage: umount Dec 13 02:31:50.212354 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:31:50.212354 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 02:31:50.219871 ignition[949]: INFO : umount: umount passed Dec 13 02:31:50.219871 ignition[949]: INFO : Ignition finished successfully Dec 13 02:31:50.220924 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:31:50.221063 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 02:31:50.221882 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:31:50.221932 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 02:31:50.222728 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:31:50.222774 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 02:31:50.223710 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:31:50.223751 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 02:31:50.224693 systemd[1]: Stopped target network.target - Network. Dec 13 02:31:50.225651 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:31:50.225695 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 02:31:50.226722 systemd[1]: Stopped target paths.target - Path Units. Dec 13 02:31:50.227635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:31:50.229541 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:31:50.230351 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 02:31:50.231486 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 02:31:50.232497 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:31:50.232535 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 02:31:50.233520 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:31:50.233556 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 02:31:50.234751 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:31:50.234795 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 02:31:50.235725 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 02:31:50.235764 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 02:31:50.236834 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 02:31:50.238000 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 02:31:50.241484 systemd-networkd[706]: eth0: DHCPv6 lease lost Dec 13 02:31:50.243261 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:31:50.243404 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 02:31:50.244970 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:31:50.245009 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:31:50.252617 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 02:31:50.253635 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:31:50.253712 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Dec 13 02:31:50.254404 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:31:50.255320 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:31:50.255431 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 02:31:50.259801 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:31:50.259878 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:31:50.262436 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:31:50.262501 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 02:31:50.264446 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 02:31:50.264492 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:31:50.267063 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:31:50.267798 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 02:31:50.269053 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:31:50.269181 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:31:50.270959 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:31:50.271020 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 02:31:50.272151 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:31:50.272183 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:31:50.273232 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:31:50.273274 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 02:31:50.274972 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:31:50.275016 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 02:31:50.276173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:31:50.276219 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 02:31:50.283721 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 02:31:50.286330 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:31:50.286392 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:31:50.293478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:31:50.293530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:31:50.296953 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:31:50.298687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:31:50.298812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 02:31:51.242066 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:31:51.242403 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 02:31:51.244745 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 02:31:51.246954 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:31:51.247088 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 02:31:51.258709 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 02:31:51.285953 systemd[1]: Switching root. 
Dec 13 02:31:51.360084 systemd-journald[185]: Journal stopped Dec 13 02:31:53.864620 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 02:31:53.864687 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:31:53.864710 kernel: SELinux: policy capability open_perms=1 Dec 13 02:31:53.864727 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:31:53.864740 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:31:53.864755 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:31:53.864767 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:31:53.864780 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:31:53.864792 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:31:53.864809 kernel: audit: type=1403 audit(1734057112.388:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:31:53.864828 systemd[1]: Successfully loaded SELinux policy in 176.289ms. Dec 13 02:31:53.864847 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.504ms. Dec 13 02:31:53.864862 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 02:31:53.864876 systemd[1]: Detected virtualization kvm. Dec 13 02:31:53.864891 systemd[1]: Detected architecture x86-64. Dec 13 02:31:53.864903 systemd[1]: Detected first boot. Dec 13 02:31:53.864916 systemd[1]: Hostname set to . Dec 13 02:31:53.864929 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:31:53.864944 zram_generator::config[991]: No configuration found. Dec 13 02:31:53.864959 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:31:53.864973 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 02:31:53.864986 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 02:31:53.864999 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:31:53.865013 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 02:31:53.865026 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 02:31:53.865039 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 02:31:53.865052 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 02:31:53.865068 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 02:31:53.865081 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 02:31:53.865094 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 02:31:53.865107 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 02:31:53.865120 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:31:53.865133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:31:53.865145 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 02:31:53.865160 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
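After the switch to the real root, systemd loads the SELinux policy, detects a first boot, and initializes the machine ID from the VM UUID. A small sketch, assuming only the stock paths /etc/machine-id and /sys/fs/selinux/enforce (nothing Flatcar-specific), of how that resulting state could be inspected:

```python
# Sketch only: reads standard locations, not part of the boot flow logged here.
from pathlib import Path

def read_machine_id() -> str:
    # systemd committed this ID, initialized from the VM UUID per the log.
    return Path("/etc/machine-id").read_text().strip()

def selinux_enforcing() -> bool | None:
    # "1" means enforcing, "0" permissive; None if SELinux is not mounted.
    node = Path("/sys/fs/selinux/enforce")
    if not node.exists():
        return None
    return node.read_text().strip() == "1"

if __name__ == "__main__":
    print(read_machine_id())
    print(selinux_enforcing())
```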
Dec 13 02:31:53.865173 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 02:31:53.865188 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 02:31:53.865201 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 02:31:53.865214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:31:53.865226 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 02:31:53.865240 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 02:31:53.865253 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 02:31:53.865268 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 02:31:53.865281 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:31:53.865331 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 02:31:53.865348 systemd[1]: Reached target slices.target - Slice Units. Dec 13 02:31:53.865361 systemd[1]: Reached target swap.target - Swaps. Dec 13 02:31:53.865375 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 02:31:53.865392 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 02:31:53.865406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:31:53.865419 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 02:31:53.865434 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:31:53.865447 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 02:31:53.865460 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 02:31:53.865473 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 02:31:53.865486 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 02:31:53.865500 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:31:53.865512 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 02:31:53.865535 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 02:31:53.865550 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 02:31:53.865566 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:31:53.865579 systemd[1]: Reached target machines.target - Containers. Dec 13 02:31:53.865592 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 02:31:53.865604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:31:53.865617 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 02:31:53.865630 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 02:31:53.865643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:31:53.865656 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:31:53.865671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 13 02:31:53.865684 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 02:31:53.865697 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:31:53.865710 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:31:53.865723 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:31:53.865735 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 02:31:53.865748 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:31:53.865760 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:31:53.865773 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 02:31:53.865789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 02:31:53.865802 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 02:31:53.865814 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 02:31:53.865827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 02:31:53.865840 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:31:53.865853 systemd[1]: Stopped verity-setup.service. Dec 13 02:31:53.865866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:31:53.865879 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 02:31:53.865892 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 02:31:53.865906 kernel: loop: module loaded Dec 13 02:31:53.865919 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 02:31:53.865932 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 02:31:53.865944 kernel: fuse: init (API version 7.39) Dec 13 02:31:53.865956 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 02:31:53.865979 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 02:31:53.865995 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:31:53.866008 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:31:53.866021 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 02:31:53.866034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:31:53.866046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:31:53.866060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:31:53.866073 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:31:53.866088 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:31:53.866100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:31:53.866113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 02:31:53.866138 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 02:31:53.866153 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 02:31:53.866166 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Dec 13 02:31:53.866182 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 02:31:53.866212 systemd-journald[1077]: Collecting audit messages is disabled. Dec 13 02:31:53.866238 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 02:31:53.866252 systemd-journald[1077]: Journal started Dec 13 02:31:53.866278 systemd-journald[1077]: Runtime Journal (/run/log/journal/28cede2e90804f928ee54f3b65e42e53) is 4.9M, max 39.3M, 34.4M free. Dec 13 02:31:53.423131 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:31:53.449755 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 02:31:53.450186 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 02:31:53.877817 kernel: ACPI: bus type drm_connector registered Dec 13 02:31:53.883325 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 02:31:53.896477 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 02:31:53.899329 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:31:53.901317 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 02:31:53.904311 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 02:31:53.912327 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 02:31:53.923963 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 02:31:53.924015 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:31:53.935316 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 02:31:53.940321 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:31:53.945069 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 02:31:53.945152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:31:53.960952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 02:31:53.977956 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 02:31:53.978040 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 02:31:53.993141 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 02:31:53.996610 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:31:53.996746 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:31:54.000818 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:31:54.006639 kernel: loop0: detected capacity change from 0 to 8 Dec 13 02:31:54.004454 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 02:31:54.005455 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 02:31:54.007532 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 02:31:54.010206 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 02:31:54.011410 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 02:31:54.024339 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:31:54.038869 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 02:31:54.044353 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 02:31:54.050405 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 02:31:54.055582 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 02:31:54.060571 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 02:31:54.066507 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 02:31:54.074834 systemd-journald[1077]: Time spent on flushing to /var/log/journal/28cede2e90804f928ee54f3b65e42e53 is 33.545ms for 929 entries. Dec 13 02:31:54.074834 systemd-journald[1077]: System Journal (/var/log/journal/28cede2e90804f928ee54f3b65e42e53) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:31:54.126931 systemd-journald[1077]: Received client request to flush runtime journal. Dec 13 02:31:54.099102 udevadm[1137]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:31:54.131161 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 02:31:54.186856 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:31:54.190111 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 02:31:54.199495 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 02:31:54.209158 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 02:31:54.222582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 02:31:54.263319 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 02:31:54.268775 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Dec 13 02:31:54.268794 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Dec 13 02:31:54.275955 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:31:54.337333 kernel: loop4: detected capacity change from 0 to 8 Dec 13 02:31:54.341386 kernel: loop5: detected capacity change from 0 to 210664 Dec 13 02:31:54.401326 kernel: loop6: detected capacity change from 0 to 140768 Dec 13 02:31:54.469345 kernel: loop7: detected capacity change from 0 to 142488 Dec 13 02:31:54.501397 (sd-merge)[1149]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 02:31:54.502496 (sd-merge)[1149]: Merged extensions into '/usr'. Dec 13 02:31:54.512348 systemd[1]: Reloading requested from client PID 1106 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 02:31:54.512364 systemd[1]: Reloading... Dec 13 02:31:54.595213 zram_generator::config[1175]: No configuration found. Dec 13 02:31:54.837337 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:31:54.926156 systemd[1]: Reloading finished in 413 ms. Dec 13 02:31:54.959257 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 02:31:54.960516 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
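The systemd-sysext messages above show four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') being merged into /usr. A hedged sketch of how that merge could be inspected afterwards, using the standard extension search paths and the stock `systemd-sysext status` command (an illustration, not anything run during the boot logged here):

```python
# Illustration only: lists extension images and asks systemd-sysext for status.
import subprocess
from pathlib import Path

# Standard systemd-sysext search paths; the kubernetes image in this log was
# linked under /etc/extensions by Ignition.
SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images() -> list[str]:
    found: list[str] = []
    for base in map(Path, SEARCH_PATHS):
        if base.is_dir():
            found.extend(str(p) for p in sorted(base.iterdir()))
    return found

def sysext_status() -> str:
    # Prints which hierarchies (/usr, /opt) currently have extensions merged.
    result = subprocess.run(["systemd-sysext", "status"],
                            capture_output=True, text=True, check=False)
    return result.stdout

if __name__ == "__main__":
    print(list_extension_images())
    print(sysext_status())
```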
Dec 13 02:31:54.968456 systemd[1]: Starting ensure-sysext.service... Dec 13 02:31:54.974833 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 02:31:54.977471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:31:55.010759 systemd-udevd[1234]: Using default interface naming scheme 'v255'. Dec 13 02:31:55.015558 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:31:55.016418 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 02:31:55.018510 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:31:55.019223 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Dec 13 02:31:55.019443 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Dec 13 02:31:55.070906 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:31:55.070916 systemd-tmpfiles[1232]: Skipping /boot Dec 13 02:31:55.076437 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)... Dec 13 02:31:55.076461 systemd[1]: Reloading... Dec 13 02:31:55.080631 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:31:55.080739 systemd-tmpfiles[1232]: Skipping /boot Dec 13 02:31:55.163342 zram_generator::config[1265]: No configuration found. Dec 13 02:31:55.293678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:31:55.352116 systemd[1]: Reloading finished in 275 ms. Dec 13 02:31:55.370725 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:31:55.390504 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 02:31:55.413275 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 02:31:55.422605 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 02:31:55.426449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 02:31:55.435467 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 02:31:55.440061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:31:55.440248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:31:55.442820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:31:55.446888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:31:55.449556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:31:55.450449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:31:55.450585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:31:55.453458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
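The "Duplicate line for path" warnings from systemd-tmpfiles above come from more than one tmpfiles.d entry declaring the same path (here /root, /var/log/journal and /var/lib/systemd). The following sketch is not systemd's own parser; it simply scans the usual tmpfiles.d directories for paths declared more than once, which is the condition that triggers those warnings:

```python
# Sketch: find tmpfiles.d paths declared more than once across fragments.
from collections import defaultdict
from pathlib import Path

# Standard tmpfiles.d locations; line format is "Type Path Mode User Group Age Argument".
TMPFILES_DIRS = ["/usr/lib/tmpfiles.d", "/etc/tmpfiles.d", "/run/tmpfiles.d"]

def duplicate_tmpfiles_paths() -> dict[str, list[str]]:
    seen: dict[str, list[str]] = defaultdict(list)
    for directory in map(Path, TMPFILES_DIRS):
        if not directory.is_dir():
            continue
        for conf in sorted(directory.glob("*.conf")):
            for line in conf.read_text().splitlines():
                fields = line.strip().split()
                if len(fields) >= 2 and not fields[0].startswith("#"):
                    seen[fields[1]].append(conf.name)  # second field is the path
    return {path: sources for path, sources in seen.items() if len(sources) > 1}

if __name__ == "__main__":
    print(duplicate_tmpfiles_paths())
```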
Dec 13 02:31:55.453596 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:31:55.456787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:31:55.456952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:31:55.459466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:31:55.460428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:31:55.466605 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 02:31:55.468877 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:31:55.477665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:31:55.477893 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:31:55.480718 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:31:55.480851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:31:55.481942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:31:55.482087 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:31:55.492203 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:31:55.492512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:31:55.498579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:31:55.503479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:31:55.507614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:31:55.510261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:31:55.510946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:31:55.511178 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:31:55.518192 systemd[1]: Finished ensure-sysext.service. Dec 13 02:31:55.519166 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 02:31:55.520082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:31:55.520361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:31:55.521685 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:31:55.521845 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:31:55.530355 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 02:31:55.531234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:31:55.531442 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:31:55.532282 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:31:55.532450 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 02:31:55.534444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:31:55.534523 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:31:55.579091 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 02:31:55.761712 ldconfig[1102]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:31:55.896886 systemd-resolved[1329]: Positive Trust Anchors: Dec 13 02:31:55.896930 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:31:55.897076 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 02:31:55.920400 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 02:31:55.921986 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 02:31:56.052640 systemd-resolved[1329]: Using system hostname 'ci-4081-2-1-d-77a3414d8f.novalocal'. Dec 13 02:31:56.055755 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 02:31:56.057203 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:31:56.327716 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 02:31:56.329243 augenrules[1366]: No rules Dec 13 02:31:56.331287 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:31:56.343154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:31:56.356667 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 02:31:56.432848 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 02:31:56.438526 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 02:31:56.462903 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 02:31:56.482754 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 02:31:56.485341 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1378) Dec 13 02:31:56.489341 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1380) Dec 13 02:31:56.500018 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1380) Dec 13 02:31:56.515423 systemd-networkd[1376]: lo: Link UP Dec 13 02:31:56.515434 systemd-networkd[1376]: lo: Gained carrier Dec 13 02:31:56.516811 systemd-networkd[1376]: Enumeration completed Dec 13 02:31:56.516918 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 02:31:56.517854 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
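The "Positive Trust Anchors" line from systemd-resolved above is the DNSSEC trust anchor for the root zone, expressed as a DS record: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the digest; the negative trust anchors that follow are private-use zones excluded from validation. A small sketch that parses the DS line exactly as it appears in the log:

```python
# Sketch only: parse the root-zone DS record from the systemd-resolved log line.
from dataclasses import dataclass

@dataclass
class DSRecord:
    owner: str        # "." for the root zone
    key_tag: int      # 20326 in the log
    algorithm: int    # 8 = RSA/SHA-256
    digest_type: int  # 2 = SHA-256
    digest: str

def parse_ds(line: str) -> DSRecord:
    owner, _cls, _rtype, key_tag, alg, dtype, digest = line.split()
    return DSRecord(owner, int(key_tag), int(alg), int(dtype), digest.lower())

record = parse_ds(". IN DS 20326 8 2 "
                  "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
print(record.key_tag, record.digest_type)
```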
Dec 13 02:31:56.517861 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:31:56.518074 systemd[1]: Reached target network.target - Network. Dec 13 02:31:56.518779 systemd-networkd[1376]: eth0: Link UP Dec 13 02:31:56.518783 systemd-networkd[1376]: eth0: Gained carrier Dec 13 02:31:56.518806 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:31:56.528611 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 02:31:56.537387 systemd-networkd[1376]: eth0: DHCPv4 address 172.24.4.31/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 02:31:56.539485 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Dec 13 02:31:56.560574 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 02:31:56.567493 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 02:31:56.576645 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:31:56.609333 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 02:31:56.609417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:31:56.615351 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:31:56.699386 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 02:31:56.705764 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 02:31:56.726566 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:31:56.766320 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:31:56.807535 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 02:31:56.807663 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 02:31:56.815324 kernel: Console: switching to colour dummy device 80x25 Dec 13 02:31:56.818344 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 02:31:56.818399 kernel: [drm] features: -context_init Dec 13 02:31:56.819517 kernel: [drm] number of scanouts: 1 Dec 13 02:31:56.820363 kernel: [drm] number of cap sets: 0 Dec 13 02:31:56.822316 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 02:31:56.828347 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 02:31:56.832875 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 02:31:56.838752 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 02:31:56.839401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:31:56.839763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:31:56.855535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:31:56.857912 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 02:31:56.868553 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 02:31:56.928745 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
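systemd-networkd matched eth0 against the stock zz-default.network unit and acquired 172.24.4.31/24 with gateway 172.24.4.1 over DHCPv4. A quick stdlib check (illustration only) that derives the subnet from that lease and confirms the gateway is on-link:

```python
# Small illustration using the stdlib ipaddress module and the lease from the log.
import ipaddress

iface = ipaddress.ip_interface("172.24.4.31/24")   # address from the DHCPv4 lease
gateway = ipaddress.ip_address("172.24.4.1")       # gateway from the lease

print(iface.network)                    # 172.24.4.0/24
print(gateway in iface.network)         # True: gateway is reachable on-link
print(iface.network.broadcast_address)  # 172.24.4.255
```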
Dec 13 02:31:56.996938 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 02:31:56.999534 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:31:57.008617 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 02:31:57.032592 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:31:57.091781 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 02:31:57.673194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:31:58.322147 systemd-networkd[1376]: eth0: Gained IPv6LL Dec 13 02:31:58.323653 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Dec 13 02:31:58.328209 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 02:31:58.330061 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 02:31:58.389272 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 02:31:58.390092 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:31:58.390224 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 02:31:58.390770 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 02:31:58.391129 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 02:31:58.393197 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 02:31:58.393838 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 02:31:58.394255 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 02:31:58.396143 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:31:58.396270 systemd[1]: Reached target paths.target - Path Units. Dec 13 02:31:58.396584 systemd[1]: Reached target timers.target - Timer Units. Dec 13 02:31:58.429587 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 02:31:58.439066 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 02:31:58.615462 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 02:31:58.619674 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 02:31:58.622592 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 02:31:58.623735 systemd[1]: Reached target basic.target - Basic System. Dec 13 02:31:58.624939 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:31:58.625010 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:31:58.641627 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 02:31:58.647814 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 02:31:58.655603 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 02:31:58.668583 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Dec 13 02:31:58.683942 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 02:31:58.689192 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 02:31:58.700121 jq[1430]: false Dec 13 02:31:58.774652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:31:58.781707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 02:31:58.798809 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 02:31:58.816629 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 02:31:58.823608 extend-filesystems[1433]: Found loop4 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found loop5 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found loop6 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found loop7 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found vda Dec 13 02:31:58.837555 extend-filesystems[1433]: Found vda1 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found vda2 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found vda3 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found usr Dec 13 02:31:58.837555 extend-filesystems[1433]: Found vda4 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found vda6 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found vda7 Dec 13 02:31:58.837555 extend-filesystems[1433]: Found vda9 Dec 13 02:31:58.837555 extend-filesystems[1433]: Checking size of /dev/vda9 Dec 13 02:31:58.834416 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 02:31:58.860659 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 02:31:58.869894 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:31:58.870696 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:31:58.879448 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 02:31:58.885978 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 02:31:58.897993 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:31:58.898204 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 02:31:58.900647 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:31:58.900849 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 02:31:58.906490 jq[1447]: true Dec 13 02:31:58.933222 jq[1452]: true Dec 13 02:31:58.938974 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 02:31:58.939700 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:31:58.939891 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 02:31:58.954727 update_engine[1446]: I20241213 02:31:58.954349 1446 main.cc:92] Flatcar Update Engine starting Dec 13 02:31:58.963416 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 13 02:31:59.013061 extend-filesystems[1433]: Resized partition /dev/vda9 Dec 13 02:31:59.027553 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Dec 13 02:31:59.135270 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1378) Dec 13 02:31:59.035049 systemd-logind[1441]: New seat seat0. Dec 13 02:31:59.137229 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:31:59.137278 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:31:59.138474 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 02:31:59.329508 dbus-daemon[1429]: [system] SELinux support is enabled Dec 13 02:31:59.330526 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 02:31:59.357328 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 02:31:59.359911 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:31:59.359988 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 02:31:59.364963 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:31:59.365025 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 02:31:59.380759 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:31:59.389251 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:31:59.397355 systemd[1]: Started update-engine.service - Update Engine. Dec 13 02:31:59.404320 update_engine[1446]: I20241213 02:31:59.402071 1446 update_check_scheduler.cc:74] Next update check in 4m23s Dec 13 02:31:59.410632 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 02:31:59.425977 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 02:31:59.439540 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 02:31:59.450834 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:31:59.451035 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 02:31:59.465027 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 02:31:59.529634 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 02:31:59.561847 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 02:31:59.574558 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:31:59.579656 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 02:31:59.581406 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 02:31:59.729914 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 02:31:59.730084 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:31:59.732891 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 02:31:59.752666 systemd[1]: Starting sshkeys.service... Dec 13 02:31:59.765129 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
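The kernel line "EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks" records the online grow of the root filesystem requested by extend-filesystems; with the 4 KiB block size reported by resize2fs, that is roughly 6.2 GiB growing to about 17.7 GiB. The arithmetic, as a quick check using only the numbers from the log:

```python
# Arithmetic check on the resize logged above; block counts are 4 KiB blocks.
BLOCK_SIZE = 4096
old_blocks, new_blocks = 1_617_920, 4_635_643

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")  # ~6.17 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")  # ~17.68 GiB
```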
Dec 13 02:31:59.772740 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 02:31:59.824137 extend-filesystems[1492]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 02:31:59.824137 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 02:31:59.824137 extend-filesystems[1492]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 02:31:59.829468 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Dec 13 02:31:59.826528 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:31:59.832841 containerd[1459]: time="2024-12-13T02:31:59.830663302Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 02:31:59.826739 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 02:31:59.882375 containerd[1459]: time="2024-12-13T02:31:59.882097958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:31:59.884326 containerd[1459]: time="2024-12-13T02:31:59.884243883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:31:59.884417 containerd[1459]: time="2024-12-13T02:31:59.884398302Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:31:59.884487 containerd[1459]: time="2024-12-13T02:31:59.884471640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:31:59.884734 containerd[1459]: time="2024-12-13T02:31:59.884714736Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 02:31:59.884829 containerd[1459]: time="2024-12-13T02:31:59.884812619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 02:31:59.884969 containerd[1459]: time="2024-12-13T02:31:59.884948314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:31:59.885034 containerd[1459]: time="2024-12-13T02:31:59.885019978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:31:59.885324 containerd[1459]: time="2024-12-13T02:31:59.885279265Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:31:59.885395 containerd[1459]: time="2024-12-13T02:31:59.885379112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:31:59.885478 containerd[1459]: time="2024-12-13T02:31:59.885460204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:31:59.885540 containerd[1459]: time="2024-12-13T02:31:59.885525226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:31:59.885690 containerd[1459]: time="2024-12-13T02:31:59.885671630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:31:59.885999 containerd[1459]: time="2024-12-13T02:31:59.885979418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:31:59.886214 containerd[1459]: time="2024-12-13T02:31:59.886188941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:31:59.886286 containerd[1459]: time="2024-12-13T02:31:59.886271045Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:31:59.886465 containerd[1459]: time="2024-12-13T02:31:59.886445682Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:31:59.886580 containerd[1459]: time="2024-12-13T02:31:59.886563112Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:32:00.026046 containerd[1459]: time="2024-12-13T02:32:00.025746972Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:32:00.026046 containerd[1459]: time="2024-12-13T02:32:00.025865254Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:32:00.026046 containerd[1459]: time="2024-12-13T02:32:00.025908675Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 02:32:00.026046 containerd[1459]: time="2024-12-13T02:32:00.025951876Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 02:32:00.026046 containerd[1459]: time="2024-12-13T02:32:00.025990709Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:32:00.026533 containerd[1459]: time="2024-12-13T02:32:00.026375470Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:32:00.027090 containerd[1459]: time="2024-12-13T02:32:00.027017925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027260631Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027369014Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027423907Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027468571Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027503957Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027537049Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027572205Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027608804Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027641224Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027675579Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027706286Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027763123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027798590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.030563 containerd[1459]: time="2024-12-13T02:32:00.027830409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.027868390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.027900481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.027934194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.027967306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028002512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028035344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028071571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028104333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028137405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028168403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028206915Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028254074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028286755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.031383 containerd[1459]: time="2024-12-13T02:32:00.028363429Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:32:00.032091 containerd[1459]: time="2024-12-13T02:32:00.028483083Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:32:00.032091 containerd[1459]: time="2024-12-13T02:32:00.028531294Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 02:32:00.032091 containerd[1459]: time="2024-12-13T02:32:00.028560679Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:32:00.032091 containerd[1459]: time="2024-12-13T02:32:00.028596807Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 02:32:00.032091 containerd[1459]: time="2024-12-13T02:32:00.028625190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.032091 containerd[1459]: time="2024-12-13T02:32:00.028685222Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 02:32:00.032091 containerd[1459]: time="2024-12-13T02:32:00.028720729Z" level=info msg="NRI interface is disabled by configuration." Dec 13 02:32:00.032091 containerd[1459]: time="2024-12-13T02:32:00.028766585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:32:00.032983 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.029489190Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.029661073Z" level=info msg="Connect containerd service" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.029742395Z" level=info msg="using legacy CRI server" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.029761992Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.029959422Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.031455789Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:32:00.035274 containerd[1459]: 
time="2024-12-13T02:32:00.032015057Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.032124823Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.032273132Z" level=info msg="Start subscribing containerd event" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.032402514Z" level=info msg="Start recovering state" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.032579847Z" level=info msg="Start event monitor" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.032661480Z" level=info msg="Start snapshots syncer" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.032699441Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:32:00.035274 containerd[1459]: time="2024-12-13T02:32:00.032718336Z" level=info msg="Start streaming server" Dec 13 02:32:00.037460 containerd[1459]: time="2024-12-13T02:32:00.036763533Z" level=info msg="containerd successfully booted in 0.210407s" Dec 13 02:32:00.781352 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 02:32:00.794091 systemd[1]: Started sshd@0-172.24.4.31:22-172.24.4.1:52366.service - OpenSSH per-connection server daemon (172.24.4.1:52366). Dec 13 02:32:02.259610 sshd[1531]: Accepted publickey for core from 172.24.4.1 port 52366 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:32:02.263693 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:32:02.293879 systemd-logind[1441]: New session 1 of user core. Dec 13 02:32:02.299732 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 02:32:02.316177 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 02:32:02.351815 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 02:32:02.362688 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 02:32:02.381887 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:02.531647 systemd[1537]: Queued start job for default target default.target. Dec 13 02:32:02.539398 systemd[1537]: Created slice app.slice - User Application Slice. Dec 13 02:32:02.539423 systemd[1537]: Reached target paths.target - Paths. Dec 13 02:32:02.539438 systemd[1537]: Reached target timers.target - Timers. Dec 13 02:32:02.544355 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 02:32:02.581896 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 02:32:02.582200 systemd[1537]: Reached target sockets.target - Sockets. Dec 13 02:32:02.582232 systemd[1537]: Reached target basic.target - Basic System. Dec 13 02:32:02.582370 systemd[1537]: Reached target default.target - Main User Target. Dec 13 02:32:02.582428 systemd[1537]: Startup finished in 186ms. Dec 13 02:32:02.583197 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 02:32:02.596038 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 02:32:03.027237 systemd[1]: Started sshd@1-172.24.4.31:22-172.24.4.1:45216.service - OpenSSH per-connection server daemon (172.24.4.1:45216). 
Dec 13 02:32:04.403668 sshd[1549]: Accepted publickey for core from 172.24.4.1 port 45216 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:32:04.407694 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:32:04.418411 systemd-logind[1441]: New session 2 of user core. Dec 13 02:32:04.425800 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 02:32:04.553627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:32:04.557181 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:32:04.626041 login[1510]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:32:04.632095 login[1509]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:32:04.639048 systemd-logind[1441]: New session 3 of user core. Dec 13 02:32:04.641504 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 02:32:04.645599 systemd-logind[1441]: New session 4 of user core. Dec 13 02:32:04.651549 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 02:32:05.209061 sshd[1549]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:05.226366 systemd[1]: sshd@1-172.24.4.31:22-172.24.4.1:45216.service: Deactivated successfully. Dec 13 02:32:05.230999 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:32:05.235687 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:32:05.246874 systemd[1]: Started sshd@2-172.24.4.31:22-172.24.4.1:48670.service - OpenSSH per-connection server daemon (172.24.4.1:48670). Dec 13 02:32:05.253886 systemd-logind[1441]: Removed session 2. Dec 13 02:32:05.774240 coreos-metadata[1428]: Dec 13 02:32:05.774 WARN failed to locate config-drive, using the metadata service API instead Dec 13 02:32:05.827110 coreos-metadata[1428]: Dec 13 02:32:05.826 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 02:32:05.998208 coreos-metadata[1428]: Dec 13 02:32:05.998 INFO Fetch successful Dec 13 02:32:05.998208 coreos-metadata[1428]: Dec 13 02:32:05.998 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 02:32:06.014800 coreos-metadata[1428]: Dec 13 02:32:06.014 INFO Fetch successful Dec 13 02:32:06.014800 coreos-metadata[1428]: Dec 13 02:32:06.014 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 02:32:06.031219 coreos-metadata[1428]: Dec 13 02:32:06.030 INFO Fetch successful Dec 13 02:32:06.031219 coreos-metadata[1428]: Dec 13 02:32:06.031 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 02:32:06.049477 coreos-metadata[1428]: Dec 13 02:32:06.049 INFO Fetch successful Dec 13 02:32:06.049477 coreos-metadata[1428]: Dec 13 02:32:06.049 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 02:32:06.067727 coreos-metadata[1428]: Dec 13 02:32:06.067 INFO Fetch successful Dec 13 02:32:06.067727 coreos-metadata[1428]: Dec 13 02:32:06.067 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 02:32:06.086646 coreos-metadata[1428]: Dec 13 02:32:06.086 INFO Fetch successful Dec 13 02:32:06.140008 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
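coreos-metadata fails to find a config-drive and falls back to the OpenStack metadata service, fetching the endpoints listed above one by one. A minimal Go sketch that hits the same 169.254.169.254 URLs from the log; unlike the agent it does no retries, timeouts, or parsing, so it is only a way to see the raw responses:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        // Endpoints copied from the coreos-metadata entries above.
        urls := []string{
            "http://169.254.169.254/openstack/2012-08-10/meta_data.json",
            "http://169.254.169.254/latest/meta-data/hostname",
            "http://169.254.169.254/latest/meta-data/instance-id",
            "http://169.254.169.254/latest/meta-data/local-ipv4",
            "http://169.254.169.254/latest/meta-data/public-ipv4",
        }
        for _, u := range urls {
            resp, err := http.Get(u)
            if err != nil {
                log.Fatal(err)
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %s\n", u, body)
        }
    }

Run from inside the guest, each request should return the same values the agent records as "Fetch successful" above.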
Dec 13 02:32:06.142009 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 02:32:06.153908 kubelet[1557]: E1213 02:32:06.153860 1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:32:06.157744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:32:06.157901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:32:06.158561 systemd[1]: kubelet.service: Consumed 2.193s CPU time. Dec 13 02:32:06.591835 sshd[1592]: Accepted publickey for core from 172.24.4.1 port 48670 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:32:06.594987 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:32:06.605867 systemd-logind[1441]: New session 5 of user core. Dec 13 02:32:06.615791 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 02:32:06.850515 coreos-metadata[1522]: Dec 13 02:32:06.850 WARN failed to locate config-drive, using the metadata service API instead Dec 13 02:32:06.894010 coreos-metadata[1522]: Dec 13 02:32:06.893 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 02:32:06.910996 coreos-metadata[1522]: Dec 13 02:32:06.910 INFO Fetch successful Dec 13 02:32:06.910996 coreos-metadata[1522]: Dec 13 02:32:06.910 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 02:32:06.925004 coreos-metadata[1522]: Dec 13 02:32:06.924 INFO Fetch successful Dec 13 02:32:06.931342 unknown[1522]: wrote ssh authorized keys file for user: core Dec 13 02:32:06.978133 update-ssh-keys[1608]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:32:06.979513 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 02:32:06.984147 systemd[1]: Finished sshkeys.service. Dec 13 02:32:06.988683 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 02:32:06.989068 systemd[1]: Startup finished in 1.028s (kernel) + 16.546s (initrd) + 14.774s (userspace) = 32.349s. Dec 13 02:32:07.391856 sshd[1592]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:07.397166 systemd[1]: sshd@2-172.24.4.31:22-172.24.4.1:48670.service: Deactivated successfully. Dec 13 02:32:07.400714 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:32:07.403975 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:32:07.406255 systemd-logind[1441]: Removed session 5. Dec 13 02:32:16.252103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:32:16.261744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:32:16.437682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:32:16.441839 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:32:17.414007 systemd[1]: Started sshd@3-172.24.4.31:22-172.24.4.1:40076.service - OpenSSH per-connection server daemon (172.24.4.1:40076). 
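The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet, and systemd keeps scheduling restarts until provisioning eventually writes that file. A minimal sketch of the same precondition check; the stub it writes carries only the standard kubelet.config.k8s.io/v1beta1 header and is an illustrative assumption, not the configuration this node later runs with:

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    const kubeletConfig = "/var/lib/kubelet/config.yaml"

    // Minimal KubeletConfiguration header; real deployments set many more fields.
    const stub = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    `

    func main() {
        if _, err := os.Stat(kubeletConfig); err == nil {
            fmt.Println("kubelet config present:", kubeletConfig)
            return
        } else if !os.IsNotExist(err) {
            log.Fatal(err)
        }
        // Same condition the kubelet trips over above: the file is simply not there yet.
        fmt.Println("missing", kubeletConfig, "- writing a minimal stub")
        if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(kubeletConfig, []byte(stub), 0o644); err != nil {
            log.Fatal(err)
        }
    }

Until that file appears, the repeated "failed to load kubelet config file" exits and restart-counter increments seen below are the expected behaviour.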
Dec 13 02:32:17.460722 kubelet[1621]: E1213 02:32:17.460523 1621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:32:17.467090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:32:17.467454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:32:18.656104 sshd[1630]: Accepted publickey for core from 172.24.4.1 port 40076 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:32:18.659121 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:32:18.671159 systemd-logind[1441]: New session 6 of user core. Dec 13 02:32:18.676057 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 02:32:19.280612 sshd[1630]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:19.299808 systemd[1]: sshd@3-172.24.4.31:22-172.24.4.1:40076.service: Deactivated successfully. Dec 13 02:32:19.305960 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:32:19.310852 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:32:19.320963 systemd[1]: Started sshd@4-172.24.4.31:22-172.24.4.1:40086.service - OpenSSH per-connection server daemon (172.24.4.1:40086). Dec 13 02:32:19.324058 systemd-logind[1441]: Removed session 6. Dec 13 02:32:20.916910 sshd[1639]: Accepted publickey for core from 172.24.4.1 port 40086 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:32:20.920758 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:32:20.933406 systemd-logind[1441]: New session 7 of user core. Dec 13 02:32:20.941636 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 02:32:21.513156 sshd[1639]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:21.522088 systemd[1]: sshd@4-172.24.4.31:22-172.24.4.1:40086.service: Deactivated successfully. Dec 13 02:32:21.523505 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:32:21.525147 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:32:21.531920 systemd[1]: Started sshd@5-172.24.4.31:22-172.24.4.1:40092.service - OpenSSH per-connection server daemon (172.24.4.1:40092). Dec 13 02:32:21.535764 systemd-logind[1441]: Removed session 7. Dec 13 02:32:23.086001 sshd[1646]: Accepted publickey for core from 172.24.4.1 port 40092 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:32:23.088831 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:32:23.098517 systemd-logind[1441]: New session 8 of user core. Dec 13 02:32:23.107600 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 02:32:23.827567 sshd[1646]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:23.838067 systemd[1]: sshd@5-172.24.4.31:22-172.24.4.1:40092.service: Deactivated successfully. Dec 13 02:32:23.841459 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:32:23.845774 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:32:23.852931 systemd[1]: Started sshd@6-172.24.4.31:22-172.24.4.1:40100.service - OpenSSH per-connection server daemon (172.24.4.1:40100). 
Dec 13 02:32:23.855738 systemd-logind[1441]: Removed session 8. Dec 13 02:32:25.336248 sshd[1653]: Accepted publickey for core from 172.24.4.1 port 40100 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:32:25.338907 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:32:25.351405 systemd-logind[1441]: New session 9 of user core. Dec 13 02:32:25.359610 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 02:32:25.733018 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 02:32:25.734371 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:32:25.754068 sudo[1656]: pam_unix(sudo:session): session closed for user root Dec 13 02:32:26.015217 sshd[1653]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:26.027064 systemd[1]: sshd@6-172.24.4.31:22-172.24.4.1:40100.service: Deactivated successfully. Dec 13 02:32:26.030697 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:32:26.032520 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:32:26.040940 systemd[1]: Started sshd@7-172.24.4.31:22-172.24.4.1:60722.service - OpenSSH per-connection server daemon (172.24.4.1:60722). Dec 13 02:32:26.044518 systemd-logind[1441]: Removed session 9. Dec 13 02:32:27.389758 sshd[1661]: Accepted publickey for core from 172.24.4.1 port 60722 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:32:27.393744 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:32:27.407460 systemd-logind[1441]: New session 10 of user core. Dec 13 02:32:27.414696 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 02:32:27.502160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:32:27.512922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:32:27.866015 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 02:32:27.868702 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:32:27.877489 sudo[1668]: pam_unix(sudo:session): session closed for user root Dec 13 02:32:27.891034 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 02:32:27.892482 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:32:27.919998 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 02:32:27.930404 auditctl[1673]: No rules Dec 13 02:32:27.931150 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 02:32:27.931756 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 02:32:27.945392 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 02:32:27.947515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:32:27.959728 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:32:27.976458 augenrules[1699]: No rules Dec 13 02:32:27.978408 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Dec 13 02:32:27.979734 sudo[1667]: pam_unix(sudo:session): session closed for user root Dec 13 02:32:28.014633 kubelet[1680]: E1213 02:32:28.014549 1680 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:32:28.018042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:32:28.018255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:32:28.198926 systemd[1]: Started sshd@8-172.24.4.31:22-172.24.4.1:60734.service - OpenSSH per-connection server daemon (172.24.4.1:60734). Dec 13 02:32:28.211061 sshd[1661]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:28.222166 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:32:28.222514 systemd[1]: sshd@7-172.24.4.31:22-172.24.4.1:60722.service: Deactivated successfully. Dec 13 02:32:28.226733 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:32:28.231556 systemd-logind[1441]: Removed session 10. Dec 13 02:32:28.926112 systemd-timesyncd[1348]: Contacted time server 54.36.61.42:123 (2.flatcar.pool.ntp.org). Dec 13 02:32:28.926208 systemd-resolved[1329]: Clock change detected. Flushing caches. Dec 13 02:32:28.926218 systemd-timesyncd[1348]: Initial clock synchronization to Fri 2024-12-13 02:32:28.925448 UTC. Dec 13 02:32:29.884887 sshd[1708]: Accepted publickey for core from 172.24.4.1 port 60734 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:32:29.887779 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:32:29.899336 systemd-logind[1441]: New session 11 of user core. Dec 13 02:32:29.909072 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 02:32:30.542266 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:32:30.543049 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:32:33.178491 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:32:33.188237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:32:33.244932 systemd[1]: Reloading requested from client PID 1752 ('systemctl') (unit session-11.scope)... Dec 13 02:32:33.244967 systemd[1]: Reloading... Dec 13 02:32:33.372040 zram_generator::config[1793]: No configuration found. Dec 13 02:32:34.101126 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:32:34.187327 systemd[1]: Reloading finished in 941 ms. Dec 13 02:32:34.237364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:32:34.240376 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:32:34.247912 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:32:34.248157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:32:34.252018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:32:34.378937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:32:34.386143 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:32:35.076615 kubelet[1859]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:32:35.076615 kubelet[1859]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:32:35.076615 kubelet[1859]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:32:35.195999 kubelet[1859]: I1213 02:32:35.195833 1859 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:32:35.627046 kubelet[1859]: I1213 02:32:35.626975 1859 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:32:35.627046 kubelet[1859]: I1213 02:32:35.627011 1859 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:32:35.627392 kubelet[1859]: I1213 02:32:35.627246 1859 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:32:36.364290 kubelet[1859]: I1213 02:32:36.364161 1859 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:32:36.413203 kubelet[1859]: I1213 02:32:36.413131 1859 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:32:36.413790 kubelet[1859]: I1213 02:32:36.413615 1859 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:32:36.414244 kubelet[1859]: I1213 02:32:36.413767 1859 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.31","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:32:36.414244 kubelet[1859]: I1213 02:32:36.414229 1859 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:32:36.414588 kubelet[1859]: I1213 02:32:36.414257 1859 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:32:36.414588 kubelet[1859]: I1213 02:32:36.414494 1859 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:32:36.416572 kubelet[1859]: I1213 02:32:36.416496 1859 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:32:36.416572 kubelet[1859]: I1213 02:32:36.416543 1859 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:32:36.418007 kubelet[1859]: I1213 02:32:36.416595 1859 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:32:36.418007 kubelet[1859]: I1213 02:32:36.416631 1859 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:32:36.418007 kubelet[1859]: E1213 02:32:36.417441 1859 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:36.420386 kubelet[1859]: E1213 02:32:36.418262 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:36.427605 kubelet[1859]: I1213 02:32:36.426903 1859 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:32:36.431065 kubelet[1859]: I1213 02:32:36.430878 1859 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:32:36.431065 kubelet[1859]: W1213 02:32:36.430998 1859 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:32:36.432864 kubelet[1859]: I1213 02:32:36.432628 1859 server.go:1264] "Started kubelet" Dec 13 02:32:36.436321 kubelet[1859]: I1213 02:32:36.436235 1859 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:32:36.449717 kubelet[1859]: W1213 02:32:36.447787 1859 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 02:32:36.449717 kubelet[1859]: E1213 02:32:36.447894 1859 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 02:32:36.449717 kubelet[1859]: W1213 02:32:36.448177 1859 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 02:32:36.449717 kubelet[1859]: E1213 02:32:36.448227 1859 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 02:32:36.450279 kubelet[1859]: I1213 02:32:36.450226 1859 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:32:36.452243 kubelet[1859]: I1213 02:32:36.452175 1859 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:32:36.453107 kubelet[1859]: I1213 02:32:36.453072 1859 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:32:36.453395 kubelet[1859]: I1213 02:32:36.453346 1859 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:32:36.453495 kubelet[1859]: I1213 02:32:36.453482 1859 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:32:36.455904 kubelet[1859]: I1213 02:32:36.455803 1859 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:32:36.456527 kubelet[1859]: I1213 02:32:36.456492 1859 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:32:36.469803 kubelet[1859]: W1213 02:32:36.469748 1859 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 02:32:36.470236 kubelet[1859]: E1213 02:32:36.470206 1859 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 02:32:36.470867 kubelet[1859]: I1213 02:32:36.470832 1859 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:32:36.471199 kubelet[1859]: I1213 02:32:36.471154 1859 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:32:36.473977 kubelet[1859]: I1213 02:32:36.473947 1859 
factory.go:221] Registration of the containerd container factory successfully Dec 13 02:32:36.486290 kubelet[1859]: E1213 02:32:36.486263 1859 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:32:36.495464 kubelet[1859]: I1213 02:32:36.495429 1859 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:32:36.495686 kubelet[1859]: I1213 02:32:36.495672 1859 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:32:36.495822 kubelet[1859]: I1213 02:32:36.495810 1859 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:32:36.514279 kubelet[1859]: I1213 02:32:36.514253 1859 policy_none.go:49] "None policy: Start" Dec 13 02:32:36.516071 kubelet[1859]: I1213 02:32:36.516055 1859 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:32:36.516179 kubelet[1859]: I1213 02:32:36.516168 1859 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:32:36.522416 kubelet[1859]: E1213 02:32:36.522386 1859 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.31\" not found" node="172.24.4.31" Dec 13 02:32:36.548036 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 02:32:36.558001 kubelet[1859]: I1213 02:32:36.557967 1859 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.31" Dec 13 02:32:36.568764 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 02:32:36.570159 kubelet[1859]: I1213 02:32:36.569192 1859 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.31" Dec 13 02:32:36.574616 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 02:32:36.581877 kubelet[1859]: I1213 02:32:36.581825 1859 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:32:36.582205 kubelet[1859]: I1213 02:32:36.582060 1859 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:32:36.582266 kubelet[1859]: I1213 02:32:36.582221 1859 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:32:36.587544 kubelet[1859]: E1213 02:32:36.587475 1859 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.31\" not found" Dec 13 02:32:36.593349 kubelet[1859]: E1213 02:32:36.593113 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:36.630429 kubelet[1859]: I1213 02:32:36.629493 1859 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 02:32:36.630429 kubelet[1859]: E1213 02:32:36.629698 1859 request.go:1116] Unexpected error when reading response body: read tcp 172.24.4.31:36678->172.24.4.181:6443: use of closed network connection Dec 13 02:32:36.630429 kubelet[1859]: I1213 02:32:36.630333 1859 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:32:36.634160 kubelet[1859]: I1213 02:32:36.634122 1859 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:32:36.634160 kubelet[1859]: I1213 02:32:36.634159 1859 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:32:36.634271 kubelet[1859]: I1213 02:32:36.634189 1859 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:32:36.634271 kubelet[1859]: E1213 02:32:36.634234 1859 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 02:32:36.694179 kubelet[1859]: E1213 02:32:36.694111 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:36.794379 kubelet[1859]: E1213 02:32:36.794308 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:36.896551 kubelet[1859]: E1213 02:32:36.895733 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:36.921611 sudo[1713]: pam_unix(sudo:session): session closed for user root Dec 13 02:32:36.996419 kubelet[1859]: E1213 02:32:36.996326 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:37.097901 kubelet[1859]: E1213 02:32:37.097624 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:37.189327 sshd[1708]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:37.197921 systemd[1]: sshd@8-172.24.4.31:22-172.24.4.1:60734.service: Deactivated successfully. Dec 13 02:32:37.198407 kubelet[1859]: E1213 02:32:37.198340 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:37.202095 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:32:37.202597 systemd[1]: session-11.scope: Consumed 1.139s CPU time, 111.1M memory peak, 0B memory swap peak. Dec 13 02:32:37.204515 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:32:37.207442 systemd-logind[1441]: Removed session 11. 
Dec 13 02:32:37.299089 kubelet[1859]: E1213 02:32:37.298946 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:37.399793 kubelet[1859]: E1213 02:32:37.399710 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:37.419213 kubelet[1859]: E1213 02:32:37.419131 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:37.500512 kubelet[1859]: E1213 02:32:37.500293 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:37.601084 kubelet[1859]: E1213 02:32:37.600981 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:37.701492 kubelet[1859]: E1213 02:32:37.701390 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:37.801983 kubelet[1859]: E1213 02:32:37.801753 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:37.902071 kubelet[1859]: E1213 02:32:37.901979 1859 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.31\" not found" Dec 13 02:32:38.003891 kubelet[1859]: I1213 02:32:38.003833 1859 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 02:32:38.004427 containerd[1459]: time="2024-12-13T02:32:38.004319618Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:32:38.005765 kubelet[1859]: I1213 02:32:38.004623 1859 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 02:32:38.419326 kubelet[1859]: E1213 02:32:38.419253 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:38.420182 kubelet[1859]: I1213 02:32:38.419448 1859 apiserver.go:52] "Watching apiserver" Dec 13 02:32:38.433302 kubelet[1859]: I1213 02:32:38.433012 1859 topology_manager.go:215] "Topology Admit Handler" podUID="1b220933-f59f-4b42-9119-17ca0974907d" podNamespace="kube-system" podName="kube-proxy-w8bh4" Dec 13 02:32:38.433302 kubelet[1859]: I1213 02:32:38.433194 1859 topology_manager.go:215] "Topology Admit Handler" podUID="a3474f86-76d1-4cef-86bb-b305ef64f399" podNamespace="calico-system" podName="calico-node-6qlwm" Dec 13 02:32:38.433578 kubelet[1859]: I1213 02:32:38.433408 1859 topology_manager.go:215] "Topology Admit Handler" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" podNamespace="calico-system" podName="csi-node-driver-cbm6l" Dec 13 02:32:38.435694 kubelet[1859]: E1213 02:32:38.433797 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:38.452343 systemd[1]: Created slice kubepods-besteffort-pod1b220933_f59f_4b42_9119_17ca0974907d.slice - libcontainer container kubepods-besteffort-pod1b220933_f59f_4b42_9119_17ca0974907d.slice. 
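containerd reported earlier that no network config was found in /etc/cni/net.d, and the kubelet above is still waiting for one ("No cni config template is specified, wait for other system components to drop the config") even though the pod CIDR 192.168.1.0/24 is already known; the calico-node pod admitted above is typically the component that drops that config. A minimal sketch that inspects the same directory the runtime watches:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the containerd/kubelet messages above.
        const dir = "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            if os.IsNotExist(err) {
                fmt.Println(dir, "does not exist yet")
                return
            }
            log.Fatal(err)
        }
        if len(entries) == 0 {
            fmt.Println("no CNI config yet; pod networking stays NotReady")
            return
        }
        for _, e := range entries {
            fmt.Println("found", filepath.Join(dir, e.Name()))
        }
    }

While the directory stays empty, pods that need a pod network (such as csi-node-driver-cbm6l above) are skipped with "cni plugin not initialized".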
Dec 13 02:32:38.462934 kubelet[1859]: I1213 02:32:38.462887 1859 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 02:32:38.466866 kubelet[1859]: I1213 02:32:38.466732 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pk9r\" (UniqueName: \"kubernetes.io/projected/1b220933-f59f-4b42-9119-17ca0974907d-kube-api-access-8pk9r\") pod \"kube-proxy-w8bh4\" (UID: \"1b220933-f59f-4b42-9119-17ca0974907d\") " pod="kube-system/kube-proxy-w8bh4" Dec 13 02:32:38.466866 kubelet[1859]: I1213 02:32:38.466828 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-var-lib-calico\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.467099 kubelet[1859]: I1213 02:32:38.466887 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-bin-dir\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.467099 kubelet[1859]: I1213 02:32:38.466938 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8343490-cada-48ea-8455-e41a86be0a3c-socket-dir\") pod \"csi-node-driver-cbm6l\" (UID: \"a8343490-cada-48ea-8455-e41a86be0a3c\") " pod="calico-system/csi-node-driver-cbm6l" Dec 13 02:32:38.467099 kubelet[1859]: I1213 02:32:38.466988 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-lib-modules\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.467099 kubelet[1859]: I1213 02:32:38.467032 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpwsb\" (UniqueName: \"kubernetes.io/projected/a3474f86-76d1-4cef-86bb-b305ef64f399-kube-api-access-wpwsb\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.467099 kubelet[1859]: I1213 02:32:38.467080 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a8343490-cada-48ea-8455-e41a86be0a3c-varrun\") pod \"csi-node-driver-cbm6l\" (UID: \"a8343490-cada-48ea-8455-e41a86be0a3c\") " pod="calico-system/csi-node-driver-cbm6l" Dec 13 02:32:38.467514 kubelet[1859]: I1213 02:32:38.467125 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8343490-cada-48ea-8455-e41a86be0a3c-kubelet-dir\") pod \"csi-node-driver-cbm6l\" (UID: \"a8343490-cada-48ea-8455-e41a86be0a3c\") " pod="calico-system/csi-node-driver-cbm6l" Dec 13 02:32:38.467514 kubelet[1859]: I1213 02:32:38.467174 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8343490-cada-48ea-8455-e41a86be0a3c-registration-dir\") pod 
\"csi-node-driver-cbm6l\" (UID: \"a8343490-cada-48ea-8455-e41a86be0a3c\") " pod="calico-system/csi-node-driver-cbm6l" Dec 13 02:32:38.467514 kubelet[1859]: I1213 02:32:38.467231 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b220933-f59f-4b42-9119-17ca0974907d-lib-modules\") pod \"kube-proxy-w8bh4\" (UID: \"1b220933-f59f-4b42-9119-17ca0974907d\") " pod="kube-system/kube-proxy-w8bh4" Dec 13 02:32:38.467514 kubelet[1859]: I1213 02:32:38.467295 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-xtables-lock\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.467514 kubelet[1859]: I1213 02:32:38.467393 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3474f86-76d1-4cef-86bb-b305ef64f399-tigera-ca-bundle\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.467912 kubelet[1859]: I1213 02:32:38.467446 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-flexvol-driver-host\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.467912 kubelet[1859]: I1213 02:32:38.467492 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpp6r\" (UniqueName: \"kubernetes.io/projected/a8343490-cada-48ea-8455-e41a86be0a3c-kube-api-access-fpp6r\") pod \"csi-node-driver-cbm6l\" (UID: \"a8343490-cada-48ea-8455-e41a86be0a3c\") " pod="calico-system/csi-node-driver-cbm6l" Dec 13 02:32:38.467912 kubelet[1859]: I1213 02:32:38.467536 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b220933-f59f-4b42-9119-17ca0974907d-kube-proxy\") pod \"kube-proxy-w8bh4\" (UID: \"1b220933-f59f-4b42-9119-17ca0974907d\") " pod="kube-system/kube-proxy-w8bh4" Dec 13 02:32:38.467912 kubelet[1859]: I1213 02:32:38.467611 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b220933-f59f-4b42-9119-17ca0974907d-xtables-lock\") pod \"kube-proxy-w8bh4\" (UID: \"1b220933-f59f-4b42-9119-17ca0974907d\") " pod="kube-system/kube-proxy-w8bh4" Dec 13 02:32:38.469447 kubelet[1859]: I1213 02:32:38.469191 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-policysync\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.470710 kubelet[1859]: I1213 02:32:38.469779 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3474f86-76d1-4cef-86bb-b305ef64f399-node-certs\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " 
pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.470710 kubelet[1859]: I1213 02:32:38.469934 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-var-run-calico\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.470710 kubelet[1859]: I1213 02:32:38.470063 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-net-dir\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.470710 kubelet[1859]: I1213 02:32:38.470141 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-log-dir\") pod \"calico-node-6qlwm\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " pod="calico-system/calico-node-6qlwm" Dec 13 02:32:38.478179 systemd[1]: Created slice kubepods-besteffort-poda3474f86_76d1_4cef_86bb_b305ef64f399.slice - libcontainer container kubepods-besteffort-poda3474f86_76d1_4cef_86bb_b305ef64f399.slice. Dec 13 02:32:38.582376 kubelet[1859]: E1213 02:32:38.582308 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:38.582376 kubelet[1859]: W1213 02:32:38.582353 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:38.582696 kubelet[1859]: E1213 02:32:38.582416 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:38.591757 kubelet[1859]: E1213 02:32:38.590955 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:38.591757 kubelet[1859]: W1213 02:32:38.591001 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:38.591757 kubelet[1859]: E1213 02:32:38.591073 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:38.622910 kubelet[1859]: E1213 02:32:38.616370 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:38.622910 kubelet[1859]: W1213 02:32:38.617755 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:38.622910 kubelet[1859]: E1213 02:32:38.617979 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:38.629004 kubelet[1859]: E1213 02:32:38.628923 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:38.629276 kubelet[1859]: W1213 02:32:38.629019 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:38.629276 kubelet[1859]: E1213 02:32:38.629059 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:38.646407 kubelet[1859]: E1213 02:32:38.644840 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:38.646407 kubelet[1859]: W1213 02:32:38.645014 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:38.646407 kubelet[1859]: E1213 02:32:38.645035 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:38.773784 containerd[1459]: time="2024-12-13T02:32:38.773502410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8bh4,Uid:1b220933-f59f-4b42-9119-17ca0974907d,Namespace:kube-system,Attempt:0,}" Dec 13 02:32:38.786346 containerd[1459]: time="2024-12-13T02:32:38.785971296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qlwm,Uid:a3474f86-76d1-4cef-86bb-b305ef64f399,Namespace:calico-system,Attempt:0,}" Dec 13 02:32:39.420415 kubelet[1859]: E1213 02:32:39.420311 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:39.596639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011988516.mount: Deactivated successfully. 
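
The driver-call.go / plugins.go triplets above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: the nodeagent~uds/uds executable (normally installed later by Calico's pod2daemon-flexvol init container into the flexvol-driver-host host-path volume registered above) is not there yet, the exec fails with "executable file not found in $PATH", and the empty output then cannot be unmarshalled as the DriverStatus JSON the probe expects, hence "unexpected end of JSON input". A minimal sketch of that parse step, assuming a DriverStatus shape equivalent to what the FlexVolume spec requires (the illustrative struct below is not the kubelet's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus mirrors the JSON a FlexVolume driver must print for "init":
    //   {"status":"Success","capabilities":{"attach":false}}
    // Field names follow the FlexVolume spec; the struct itself is illustrative.
    type driverStatus struct {
        Status       string `json:"status"`
        Message      string `json:"message,omitempty"`
        Capabilities struct {
            Attach bool `json:"attach"`
        } `json:"capabilities"`
    }

    func main() {
        var st driverStatus

        // The missing uds binary yields empty stdout, reproducing the logged error.
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input

        // A conforming driver's init response parses cleanly.
        ok := `{"status":"Success","capabilities":{"attach":false}}`
        fmt.Println(json.Unmarshal([]byte(ok), &st), st.Status) // <nil> Success
    }

The kubelet re-probes the plugin directory repeatedly, which is why the same three-line error burst keeps recurring until a conforming driver binary appears at that path.
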
Dec 13 02:32:39.607587 containerd[1459]: time="2024-12-13T02:32:39.607423479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:32:39.610621 containerd[1459]: time="2024-12-13T02:32:39.610537780Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:32:39.612594 containerd[1459]: time="2024-12-13T02:32:39.612444055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 02:32:39.614352 containerd[1459]: time="2024-12-13T02:32:39.614276813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:32:39.615819 containerd[1459]: time="2024-12-13T02:32:39.615683841Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:32:39.624704 containerd[1459]: time="2024-12-13T02:32:39.623291960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:32:39.626150 containerd[1459]: time="2024-12-13T02:32:39.626063649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 852.358228ms" Dec 13 02:32:39.633475 containerd[1459]: time="2024-12-13T02:32:39.633401321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 847.28321ms" Dec 13 02:32:39.635932 kubelet[1859]: E1213 02:32:39.635447 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:39.841663 containerd[1459]: time="2024-12-13T02:32:39.841280237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:32:39.841913 containerd[1459]: time="2024-12-13T02:32:39.841565222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:32:39.844197 containerd[1459]: time="2024-12-13T02:32:39.844026097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:32:39.844316 containerd[1459]: time="2024-12-13T02:32:39.844202127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:32:39.844395 containerd[1459]: time="2024-12-13T02:32:39.844254225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:39.845000 containerd[1459]: time="2024-12-13T02:32:39.844457446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:39.845724 containerd[1459]: time="2024-12-13T02:32:39.845134206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:39.845724 containerd[1459]: time="2024-12-13T02:32:39.845313151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:40.221749 systemd[1]: Started cri-containerd-2e505224d4ef19e2103331a823e675b7ddf90d2a2205813318d5d992fad3bde8.scope - libcontainer container 2e505224d4ef19e2103331a823e675b7ddf90d2a2205813318d5d992fad3bde8. Dec 13 02:32:40.238455 systemd[1]: Started cri-containerd-46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c.scope - libcontainer container 46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c. Dec 13 02:32:40.282858 containerd[1459]: time="2024-12-13T02:32:40.282821278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8bh4,Uid:1b220933-f59f-4b42-9119-17ca0974907d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e505224d4ef19e2103331a823e675b7ddf90d2a2205813318d5d992fad3bde8\"" Dec 13 02:32:40.287156 containerd[1459]: time="2024-12-13T02:32:40.286953998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 02:32:40.297038 containerd[1459]: time="2024-12-13T02:32:40.297001012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qlwm,Uid:a3474f86-76d1-4cef-86bb-b305ef64f399,Namespace:calico-system,Attempt:0,} returns sandbox id \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\"" Dec 13 02:32:40.421197 kubelet[1859]: E1213 02:32:40.421139 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:41.422337 kubelet[1859]: E1213 02:32:41.422273 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:41.635206 kubelet[1859]: E1213 02:32:41.634960 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:42.254286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035556136.mount: Deactivated successfully. 
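
The RunPodSandbox entries record the kubelet asking containerd, over the CRI gRPC API, to create one sandbox per pod; the registry.k8s.io/pause:3.8 "Pulled image" messages are the sandbox (infra) image being resolved for each of the two sandboxes, the cri-containerd-… scopes are the corresponding shim processes, and the returned sandbox ids are what later CreateContainer calls target. A minimal sketch of the same call using the stock CRI v1 Go bindings, with the metadata copied from the kube-proxy entry above; the socket path and the bare-bones config are assumptions, not the kubelet's actual request:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed containerd CRI endpoint on this host.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // Metadata taken from the RunPodSandbox log entry for kube-proxy-w8bh4.
        resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-proxy-w8bh4",
                    Uid:       "1b220933-f59f-4b42-9119-17ca0974907d",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        // containerd answers with the sandbox id, e.g. 2e505224d4ef19e2...
        fmt.Println(resp.PodSandboxId)
    }
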
Dec 13 02:32:42.423172 kubelet[1859]: E1213 02:32:42.423098 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:43.423911 kubelet[1859]: E1213 02:32:43.423784 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:43.635230 kubelet[1859]: E1213 02:32:43.635130 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:43.861754 containerd[1459]: time="2024-12-13T02:32:43.861534714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:43.863043 containerd[1459]: time="2024-12-13T02:32:43.862857024Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Dec 13 02:32:43.864109 containerd[1459]: time="2024-12-13T02:32:43.864044120Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:43.868260 containerd[1459]: time="2024-12-13T02:32:43.868213600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:43.869150 containerd[1459]: time="2024-12-13T02:32:43.868924834Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 3.581931682s" Dec 13 02:32:43.869150 containerd[1459]: time="2024-12-13T02:32:43.868971211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 02:32:43.871666 containerd[1459]: time="2024-12-13T02:32:43.870886483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 02:32:43.872001 containerd[1459]: time="2024-12-13T02:32:43.871970827Z" level=info msg="CreateContainer within sandbox \"2e505224d4ef19e2103331a823e675b7ddf90d2a2205813318d5d992fad3bde8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:32:43.897621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901012272.mount: Deactivated successfully. 
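
The recurring file_linux.go entries ("Unable to read config path … /etc/kubernetes/manifests") are the kubelet's static-pod file source noticing that its configured staticPodPath does not exist; on this worker there are no static pods, so the condition is logged and ignored rather than treated as fatal, and the message repeats each time the path is re-checked. Roughly the check involved, as a standalone sketch with the path and wording taken from the log (the real kubelet's retry and watch machinery is omitted):

    package main

    import (
        "errors"
        "log"
        "os"
    )

    // staticPodPath matches the path in the kubelet log entries above.
    const staticPodPath = "/etc/kubernetes/manifests"

    func main() {
        if _, err := os.Stat(staticPodPath); errors.Is(err, os.ErrNotExist) {
            // Mirrors the logged behaviour: a missing directory simply means
            // "no static pods"; it is reported and ignored, not fatal.
            log.Printf("Unable to read config path %q: path does not exist, ignoring", staticPodPath)
            return
        } else if err != nil {
            log.Fatal(err)
        }
        log.Printf("would sync static pod manifests from %s", staticPodPath)
    }
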
Dec 13 02:32:43.906328 containerd[1459]: time="2024-12-13T02:32:43.906282779Z" level=info msg="CreateContainer within sandbox \"2e505224d4ef19e2103331a823e675b7ddf90d2a2205813318d5d992fad3bde8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d0d1f6742737ff5b55884201344a50539a1922a195b6f576612cab86c75694f\"" Dec 13 02:32:43.907915 containerd[1459]: time="2024-12-13T02:32:43.907398742Z" level=info msg="StartContainer for \"7d0d1f6742737ff5b55884201344a50539a1922a195b6f576612cab86c75694f\"" Dec 13 02:32:43.942760 systemd[1]: run-containerd-runc-k8s.io-7d0d1f6742737ff5b55884201344a50539a1922a195b6f576612cab86c75694f-runc.sFV3Yx.mount: Deactivated successfully. Dec 13 02:32:43.950812 systemd[1]: Started cri-containerd-7d0d1f6742737ff5b55884201344a50539a1922a195b6f576612cab86c75694f.scope - libcontainer container 7d0d1f6742737ff5b55884201344a50539a1922a195b6f576612cab86c75694f. Dec 13 02:32:43.987791 containerd[1459]: time="2024-12-13T02:32:43.987725024Z" level=info msg="StartContainer for \"7d0d1f6742737ff5b55884201344a50539a1922a195b6f576612cab86c75694f\" returns successfully" Dec 13 02:32:44.424865 kubelet[1859]: E1213 02:32:44.424638 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:44.714213 kubelet[1859]: E1213 02:32:44.713270 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.714213 kubelet[1859]: W1213 02:32:44.713333 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.714213 kubelet[1859]: E1213 02:32:44.713368 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.714213 kubelet[1859]: E1213 02:32:44.713733 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.714213 kubelet[1859]: W1213 02:32:44.713754 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.714213 kubelet[1859]: E1213 02:32:44.713778 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.715166 kubelet[1859]: E1213 02:32:44.714540 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.715166 kubelet[1859]: W1213 02:32:44.714604 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.715166 kubelet[1859]: E1213 02:32:44.714699 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:44.715166 kubelet[1859]: E1213 02:32:44.715062 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.715166 kubelet[1859]: W1213 02:32:44.715083 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.715166 kubelet[1859]: E1213 02:32:44.715106 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.715583 kubelet[1859]: E1213 02:32:44.715466 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.715583 kubelet[1859]: W1213 02:32:44.715486 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.715583 kubelet[1859]: E1213 02:32:44.715520 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.716612 kubelet[1859]: E1213 02:32:44.715924 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.716612 kubelet[1859]: W1213 02:32:44.715958 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.716612 kubelet[1859]: E1213 02:32:44.715980 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.716612 kubelet[1859]: E1213 02:32:44.716273 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.716612 kubelet[1859]: W1213 02:32:44.716330 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.716612 kubelet[1859]: E1213 02:32:44.716355 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.717050 kubelet[1859]: E1213 02:32:44.716727 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.717050 kubelet[1859]: W1213 02:32:44.716747 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.717050 kubelet[1859]: E1213 02:32:44.716770 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:44.717303 kubelet[1859]: E1213 02:32:44.717082 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.717303 kubelet[1859]: W1213 02:32:44.717103 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.717303 kubelet[1859]: E1213 02:32:44.717124 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.717487 kubelet[1859]: E1213 02:32:44.717414 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.717556 kubelet[1859]: W1213 02:32:44.717434 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.717556 kubelet[1859]: E1213 02:32:44.717543 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.719006 kubelet[1859]: E1213 02:32:44.717907 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.719006 kubelet[1859]: W1213 02:32:44.717942 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.719006 kubelet[1859]: E1213 02:32:44.717965 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.719006 kubelet[1859]: E1213 02:32:44.718292 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.719006 kubelet[1859]: W1213 02:32:44.718312 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.719006 kubelet[1859]: E1213 02:32:44.718347 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.719006 kubelet[1859]: E1213 02:32:44.718746 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.719006 kubelet[1859]: W1213 02:32:44.718768 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.719006 kubelet[1859]: E1213 02:32:44.718790 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:44.719817 kubelet[1859]: E1213 02:32:44.719095 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.719817 kubelet[1859]: W1213 02:32:44.719115 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.719817 kubelet[1859]: E1213 02:32:44.719136 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.719817 kubelet[1859]: E1213 02:32:44.719473 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.719817 kubelet[1859]: W1213 02:32:44.719493 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.719817 kubelet[1859]: E1213 02:32:44.719519 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.720245 kubelet[1859]: E1213 02:32:44.719932 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.720245 kubelet[1859]: W1213 02:32:44.719952 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.720245 kubelet[1859]: E1213 02:32:44.719974 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.720454 kubelet[1859]: E1213 02:32:44.720304 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.720454 kubelet[1859]: W1213 02:32:44.720325 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.720454 kubelet[1859]: E1213 02:32:44.720356 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.722237 kubelet[1859]: E1213 02:32:44.720736 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.722237 kubelet[1859]: W1213 02:32:44.720771 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.722237 kubelet[1859]: E1213 02:32:44.720806 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:44.722237 kubelet[1859]: E1213 02:32:44.721094 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.722237 kubelet[1859]: W1213 02:32:44.721126 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.722237 kubelet[1859]: E1213 02:32:44.721147 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.722237 kubelet[1859]: E1213 02:32:44.721446 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.722237 kubelet[1859]: W1213 02:32:44.721465 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.722237 kubelet[1859]: E1213 02:32:44.721486 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.726385 kubelet[1859]: E1213 02:32:44.726030 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.726385 kubelet[1859]: W1213 02:32:44.726072 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.726385 kubelet[1859]: E1213 02:32:44.726103 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.726694 kubelet[1859]: E1213 02:32:44.726575 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.726694 kubelet[1859]: W1213 02:32:44.726604 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.726845 kubelet[1859]: E1213 02:32:44.726632 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.727465 kubelet[1859]: E1213 02:32:44.727167 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.727465 kubelet[1859]: W1213 02:32:44.727206 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.727465 kubelet[1859]: E1213 02:32:44.727231 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:44.727772 kubelet[1859]: E1213 02:32:44.727703 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.727772 kubelet[1859]: W1213 02:32:44.727727 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.728324 kubelet[1859]: E1213 02:32:44.728054 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.728866 kubelet[1859]: E1213 02:32:44.728793 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.729369 kubelet[1859]: W1213 02:32:44.729002 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.729369 kubelet[1859]: E1213 02:32:44.729076 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.729742 kubelet[1859]: E1213 02:32:44.729711 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.729924 kubelet[1859]: W1213 02:32:44.729893 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.730135 kubelet[1859]: E1213 02:32:44.730084 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.731041 kubelet[1859]: E1213 02:32:44.730741 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.731041 kubelet[1859]: W1213 02:32:44.730780 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.731041 kubelet[1859]: E1213 02:32:44.730817 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.731525 kubelet[1859]: E1213 02:32:44.731491 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.731751 kubelet[1859]: W1213 02:32:44.731717 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.732089 kubelet[1859]: E1213 02:32:44.731946 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:44.732878 kubelet[1859]: E1213 02:32:44.732492 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.732878 kubelet[1859]: W1213 02:32:44.732522 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.732878 kubelet[1859]: E1213 02:32:44.732549 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.733139 kubelet[1859]: E1213 02:32:44.732973 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.733139 kubelet[1859]: W1213 02:32:44.732997 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.733139 kubelet[1859]: E1213 02:32:44.733059 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.734520 kubelet[1859]: E1213 02:32:44.733747 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.734520 kubelet[1859]: W1213 02:32:44.733783 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.734520 kubelet[1859]: E1213 02:32:44.733812 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:44.735745 kubelet[1859]: E1213 02:32:44.735702 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:44.736046 kubelet[1859]: W1213 02:32:44.736009 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:44.736265 kubelet[1859]: E1213 02:32:44.736195 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.043005 update_engine[1446]: I20241213 02:32:45.042392 1446 update_attempter.cc:509] Updating boot flags... 
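
Between the FlexVolume bursts, the CreateContainer/StartContainer entries complete the kube-proxy startup: the kubelet creates the kube-proxy container inside the sandbox returned earlier, then starts it, and containerd reports "StartContainer … returns successfully". A sketch of those two CRI calls, again using the stock CRI v1 bindings with the ids and names copied from the log; the socket path is an assumption and mounts, environment, command and security context are left out, so this is illustrative rather than the kubelet's actual request:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()

        // Sandbox id returned by the earlier RunPodSandbox call for kube-proxy-w8bh4.
        sandboxID := "2e505224d4ef19e2103331a823e675b7ddf90d2a2205813318d5d992fad3bde8"
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name: "kube-proxy-w8bh4", Uid: "1b220933-f59f-4b42-9119-17ca0974907d",
                Namespace: "kube-system", Attempt: 0,
            },
        }

        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sandboxID,
            SandboxConfig: sandboxCfg,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.30.8"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // Corresponds to the 7d0d1f67... container id in the StartContainer entries above.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
        log.Println("started", created.ContainerId)
    }
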
Dec 13 02:32:45.092283 kubelet[1859]: I1213 02:32:45.090383 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w8bh4" podStartSLOduration=5.506131796 podStartE2EDuration="9.090352401s" podCreationTimestamp="2024-12-13 02:32:36 +0000 UTC" firstStartedPulling="2024-12-13 02:32:40.285813289 +0000 UTC m=+5.895206730" lastFinishedPulling="2024-12-13 02:32:43.870033894 +0000 UTC m=+9.479427335" observedRunningTime="2024-12-13 02:32:44.712474802 +0000 UTC m=+10.321868283" watchObservedRunningTime="2024-12-13 02:32:45.090352401 +0000 UTC m=+10.699745882" Dec 13 02:32:45.092283 kubelet[1859]: I1213 02:32:45.090710 1859 topology_manager.go:215] "Topology Admit Handler" podUID="7a10c570-2830-4abb-9451-aa898b2dda9d" podNamespace="calico-system" podName="calico-typha-55b9c6d646-rqsvh" Dec 13 02:32:45.111357 systemd[1]: Created slice kubepods-besteffort-pod7a10c570_2830_4abb_9451_aa898b2dda9d.slice - libcontainer container kubepods-besteffort-pod7a10c570_2830_4abb_9451_aa898b2dda9d.slice. Dec 13 02:32:45.122766 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2193) Dec 13 02:32:45.125973 kubelet[1859]: E1213 02:32:45.125950 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.126235 kubelet[1859]: W1213 02:32:45.126095 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.126235 kubelet[1859]: E1213 02:32:45.126120 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.126534 kubelet[1859]: E1213 02:32:45.126355 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.126534 kubelet[1859]: W1213 02:32:45.126365 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.126534 kubelet[1859]: E1213 02:32:45.126377 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.127872 kubelet[1859]: E1213 02:32:45.127709 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.127872 kubelet[1859]: W1213 02:32:45.127723 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.127872 kubelet[1859]: E1213 02:32:45.127736 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.128207 kubelet[1859]: E1213 02:32:45.128063 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.128207 kubelet[1859]: W1213 02:32:45.128075 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.128207 kubelet[1859]: E1213 02:32:45.128098 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.128569 kubelet[1859]: E1213 02:32:45.128415 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.128569 kubelet[1859]: W1213 02:32:45.128427 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.128569 kubelet[1859]: E1213 02:32:45.128438 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.129015 kubelet[1859]: E1213 02:32:45.128757 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.129015 kubelet[1859]: W1213 02:32:45.128766 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.129015 kubelet[1859]: E1213 02:32:45.128777 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.129164 kubelet[1859]: E1213 02:32:45.129152 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.129323 kubelet[1859]: W1213 02:32:45.129192 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.129323 kubelet[1859]: E1213 02:32:45.129206 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.129695 kubelet[1859]: E1213 02:32:45.129483 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.129695 kubelet[1859]: W1213 02:32:45.129494 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.129695 kubelet[1859]: E1213 02:32:45.129505 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.130084 kubelet[1859]: E1213 02:32:45.129935 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.130084 kubelet[1859]: W1213 02:32:45.129948 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.130084 kubelet[1859]: E1213 02:32:45.129959 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.130385 kubelet[1859]: E1213 02:32:45.130254 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.130385 kubelet[1859]: W1213 02:32:45.130266 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.130385 kubelet[1859]: E1213 02:32:45.130296 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.130771 kubelet[1859]: E1213 02:32:45.130757 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.131789 kubelet[1859]: W1213 02:32:45.130796 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.131789 kubelet[1859]: E1213 02:32:45.130809 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.141027 kubelet[1859]: E1213 02:32:45.134717 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141027 kubelet[1859]: W1213 02:32:45.134733 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141027 kubelet[1859]: E1213 02:32:45.134768 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.141027 kubelet[1859]: E1213 02:32:45.136610 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141027 kubelet[1859]: W1213 02:32:45.136622 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141027 kubelet[1859]: E1213 02:32:45.136633 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.141027 kubelet[1859]: I1213 02:32:45.136768 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7a10c570-2830-4abb-9451-aa898b2dda9d-typha-certs\") pod \"calico-typha-55b9c6d646-rqsvh\" (UID: \"7a10c570-2830-4abb-9451-aa898b2dda9d\") " pod="calico-system/calico-typha-55b9c6d646-rqsvh" Dec 13 02:32:45.141027 kubelet[1859]: E1213 02:32:45.136947 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141027 kubelet[1859]: W1213 02:32:45.136957 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141288 kubelet[1859]: E1213 02:32:45.136966 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.141288 kubelet[1859]: I1213 02:32:45.136982 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwt8t\" (UniqueName: \"kubernetes.io/projected/7a10c570-2830-4abb-9451-aa898b2dda9d-kube-api-access-jwt8t\") pod \"calico-typha-55b9c6d646-rqsvh\" (UID: \"7a10c570-2830-4abb-9451-aa898b2dda9d\") " pod="calico-system/calico-typha-55b9c6d646-rqsvh" Dec 13 02:32:45.141288 kubelet[1859]: E1213 02:32:45.137152 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141288 kubelet[1859]: W1213 02:32:45.137175 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141288 kubelet[1859]: E1213 02:32:45.137185 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.141288 kubelet[1859]: I1213 02:32:45.137204 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a10c570-2830-4abb-9451-aa898b2dda9d-tigera-ca-bundle\") pod \"calico-typha-55b9c6d646-rqsvh\" (UID: \"7a10c570-2830-4abb-9451-aa898b2dda9d\") " pod="calico-system/calico-typha-55b9c6d646-rqsvh" Dec 13 02:32:45.141288 kubelet[1859]: E1213 02:32:45.137405 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141288 kubelet[1859]: W1213 02:32:45.137415 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141469 kubelet[1859]: E1213 02:32:45.137424 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.141469 kubelet[1859]: E1213 02:32:45.137600 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141469 kubelet[1859]: W1213 02:32:45.137610 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141469 kubelet[1859]: E1213 02:32:45.137621 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.141469 kubelet[1859]: E1213 02:32:45.137822 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141469 kubelet[1859]: W1213 02:32:45.137832 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141469 kubelet[1859]: E1213 02:32:45.137848 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.141469 kubelet[1859]: E1213 02:32:45.138094 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141469 kubelet[1859]: W1213 02:32:45.138167 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141469 kubelet[1859]: E1213 02:32:45.138180 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.141718 kubelet[1859]: E1213 02:32:45.139838 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141718 kubelet[1859]: W1213 02:32:45.139897 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141718 kubelet[1859]: E1213 02:32:45.139972 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.141718 kubelet[1859]: E1213 02:32:45.140249 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.141718 kubelet[1859]: W1213 02:32:45.140259 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.141718 kubelet[1859]: E1213 02:32:45.140340 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.200046 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2197) Dec 13 02:32:45.240629 kubelet[1859]: E1213 02:32:45.239857 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.240629 kubelet[1859]: W1213 02:32:45.239907 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.240629 kubelet[1859]: E1213 02:32:45.239948 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.241869 kubelet[1859]: E1213 02:32:45.241787 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.241869 kubelet[1859]: W1213 02:32:45.241824 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.241869 kubelet[1859]: E1213 02:32:45.241863 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.242812 kubelet[1859]: E1213 02:32:45.242288 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.242812 kubelet[1859]: W1213 02:32:45.242306 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.242812 kubelet[1859]: E1213 02:32:45.242360 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.242812 kubelet[1859]: E1213 02:32:45.242802 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.243038 kubelet[1859]: W1213 02:32:45.242823 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.244673 kubelet[1859]: E1213 02:32:45.243399 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.244747 kubelet[1859]: E1213 02:32:45.244732 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.244785 kubelet[1859]: W1213 02:32:45.244747 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.244967 kubelet[1859]: E1213 02:32:45.244846 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.245456 kubelet[1859]: E1213 02:32:45.245308 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.245456 kubelet[1859]: W1213 02:32:45.245318 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.247784 kubelet[1859]: E1213 02:32:45.246204 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.247784 kubelet[1859]: E1213 02:32:45.247516 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.247784 kubelet[1859]: W1213 02:32:45.247536 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.247784 kubelet[1859]: E1213 02:32:45.247729 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.247784 kubelet[1859]: E1213 02:32:45.247923 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.247784 kubelet[1859]: W1213 02:32:45.247934 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.247784 kubelet[1859]: E1213 02:32:45.248029 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.249313 kubelet[1859]: E1213 02:32:45.248453 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.249313 kubelet[1859]: W1213 02:32:45.248470 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.252280 kubelet[1859]: E1213 02:32:45.249316 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.252280 kubelet[1859]: E1213 02:32:45.250365 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.252280 kubelet[1859]: W1213 02:32:45.250377 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.252280 kubelet[1859]: E1213 02:32:45.250396 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.252280 kubelet[1859]: E1213 02:32:45.251981 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.252280 kubelet[1859]: W1213 02:32:45.252004 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.252280 kubelet[1859]: E1213 02:32:45.252027 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.254013 kubelet[1859]: E1213 02:32:45.252822 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.254013 kubelet[1859]: W1213 02:32:45.252837 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.254013 kubelet[1859]: E1213 02:32:45.252851 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.256555 kubelet[1859]: E1213 02:32:45.256123 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.256555 kubelet[1859]: W1213 02:32:45.256176 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.256555 kubelet[1859]: E1213 02:32:45.256200 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.258215 kubelet[1859]: E1213 02:32:45.258184 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.258988 kubelet[1859]: W1213 02:32:45.258874 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.258988 kubelet[1859]: E1213 02:32:45.258908 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.259803 kubelet[1859]: E1213 02:32:45.259754 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.259803 kubelet[1859]: W1213 02:32:45.259770 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.260615 kubelet[1859]: E1213 02:32:45.259783 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.261983 kubelet[1859]: E1213 02:32:45.261953 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.261983 kubelet[1859]: W1213 02:32:45.261977 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.262072 kubelet[1859]: E1213 02:32:45.261998 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.271300 kubelet[1859]: E1213 02:32:45.269421 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.271300 kubelet[1859]: W1213 02:32:45.269456 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.271300 kubelet[1859]: E1213 02:32:45.269482 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.281117 kubelet[1859]: E1213 02:32:45.281024 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.281117 kubelet[1859]: W1213 02:32:45.281049 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.281117 kubelet[1859]: E1213 02:32:45.281074 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.419574 containerd[1459]: time="2024-12-13T02:32:45.419468127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b9c6d646-rqsvh,Uid:7a10c570-2830-4abb-9451-aa898b2dda9d,Namespace:calico-system,Attempt:0,}" Dec 13 02:32:45.425884 kubelet[1859]: E1213 02:32:45.425803 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:45.495417 containerd[1459]: time="2024-12-13T02:32:45.494802527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:32:45.495417 containerd[1459]: time="2024-12-13T02:32:45.494854485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:32:45.495417 containerd[1459]: time="2024-12-13T02:32:45.494868471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:45.495417 containerd[1459]: time="2024-12-13T02:32:45.494945155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:45.525042 systemd[1]: Started cri-containerd-e05b07a61dde58a56fca670cb922c845b9234b1a7875086af99fdf9ab5cd795d.scope - libcontainer container e05b07a61dde58a56fca670cb922c845b9234b1a7875086af99fdf9ab5cd795d. 
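The repeated driver-call.go and plugins.go messages above all describe one condition: the kubelet's dynamic FlexVolume probe runs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and tries to unmarshal its stdout as JSON, but the executable is not installed yet, so stdout is empty and decoding fails with "unexpected end of JSON input". Below is a minimal sketch of the call contract the kubelet is probing for, written in Python for brevity; the JSON shape follows the FlexVolume driver convention (a "Success" status plus a capabilities map), and the exact capability fields are illustrative rather than taken from this log.

#!/usr/bin/env python3
# Sketch of the FlexVolume driver-call contract referenced by the
# driver-call.go errors above: the driver binary is executed as
# "<driver> init" and must print a JSON status object to stdout.
# Empty stdout is exactly what yields "unexpected end of JSON input".
import json
import sys


def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # Capability flags here are illustrative; a real driver advertises
        # whatever operations it actually supports.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Calls the driver does not implement conventionally report "Not supported".
    print(json.dumps({"status": "Not supported"}))
    return 1


if __name__ == "__main__":
    sys.exit(main())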
Dec 13 02:32:45.597376 containerd[1459]: time="2024-12-13T02:32:45.597196557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b9c6d646-rqsvh,Uid:7a10c570-2830-4abb-9451-aa898b2dda9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e05b07a61dde58a56fca670cb922c845b9234b1a7875086af99fdf9ab5cd795d\"" Dec 13 02:32:45.635114 kubelet[1859]: E1213 02:32:45.635034 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:45.744977 kubelet[1859]: E1213 02:32:45.743791 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.744977 kubelet[1859]: W1213 02:32:45.743854 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.744977 kubelet[1859]: E1213 02:32:45.743906 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.744977 kubelet[1859]: E1213 02:32:45.744408 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.745596 kubelet[1859]: W1213 02:32:45.745111 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.745596 kubelet[1859]: E1213 02:32:45.745165 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.746891 kubelet[1859]: E1213 02:32:45.745614 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.746891 kubelet[1859]: W1213 02:32:45.745701 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.746891 kubelet[1859]: E1213 02:32:45.745740 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.746891 kubelet[1859]: E1213 02:32:45.746187 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.746891 kubelet[1859]: W1213 02:32:45.746215 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.746891 kubelet[1859]: E1213 02:32:45.746247 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.746891 kubelet[1859]: E1213 02:32:45.746861 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.746891 kubelet[1859]: W1213 02:32:45.746890 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.746891 kubelet[1859]: E1213 02:32:45.746927 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.748480 kubelet[1859]: E1213 02:32:45.747419 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.748480 kubelet[1859]: W1213 02:32:45.747446 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.748480 kubelet[1859]: E1213 02:32:45.747474 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.748480 kubelet[1859]: E1213 02:32:45.747981 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.748480 kubelet[1859]: W1213 02:32:45.748011 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.748480 kubelet[1859]: E1213 02:32:45.748040 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.750261 kubelet[1859]: E1213 02:32:45.748506 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.750261 kubelet[1859]: W1213 02:32:45.748539 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.750261 kubelet[1859]: E1213 02:32:45.748572 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.750261 kubelet[1859]: E1213 02:32:45.749517 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.750261 kubelet[1859]: W1213 02:32:45.749550 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.750261 kubelet[1859]: E1213 02:32:45.749587 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.751486 kubelet[1859]: E1213 02:32:45.750979 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.751486 kubelet[1859]: W1213 02:32:45.751033 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.751486 kubelet[1859]: E1213 02:32:45.751072 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.752286 kubelet[1859]: E1213 02:32:45.751598 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.752286 kubelet[1859]: W1213 02:32:45.751629 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.752286 kubelet[1859]: E1213 02:32:45.751726 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.752286 kubelet[1859]: E1213 02:32:45.752221 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.752286 kubelet[1859]: W1213 02:32:45.752252 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.752286 kubelet[1859]: E1213 02:32:45.752282 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.753540 kubelet[1859]: E1213 02:32:45.753049 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.753540 kubelet[1859]: W1213 02:32:45.753095 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.753540 kubelet[1859]: E1213 02:32:45.753130 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.754357 kubelet[1859]: E1213 02:32:45.754099 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.754357 kubelet[1859]: W1213 02:32:45.754145 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.754357 kubelet[1859]: E1213 02:32:45.754183 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.754840 kubelet[1859]: E1213 02:32:45.754628 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.754840 kubelet[1859]: W1213 02:32:45.754726 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.754840 kubelet[1859]: E1213 02:32:45.754762 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.755922 kubelet[1859]: E1213 02:32:45.755404 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.755922 kubelet[1859]: W1213 02:32:45.755430 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.755922 kubelet[1859]: E1213 02:32:45.755450 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.755922 kubelet[1859]: E1213 02:32:45.755858 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.755922 kubelet[1859]: W1213 02:32:45.755872 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.755922 kubelet[1859]: E1213 02:32:45.755885 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.757043 kubelet[1859]: E1213 02:32:45.756155 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.757043 kubelet[1859]: W1213 02:32:45.756169 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.757043 kubelet[1859]: E1213 02:32:45.756182 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.757691 kubelet[1859]: E1213 02:32:45.757480 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.757691 kubelet[1859]: W1213 02:32:45.757503 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.757691 kubelet[1859]: E1213 02:32:45.757519 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.758620 kubelet[1859]: E1213 02:32:45.757762 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.758620 kubelet[1859]: W1213 02:32:45.757776 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.758620 kubelet[1859]: E1213 02:32:45.757790 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.758620 kubelet[1859]: E1213 02:32:45.758054 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.758620 kubelet[1859]: W1213 02:32:45.758065 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.758620 kubelet[1859]: E1213 02:32:45.758078 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.758620 kubelet[1859]: E1213 02:32:45.758357 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.758620 kubelet[1859]: W1213 02:32:45.758372 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.758620 kubelet[1859]: E1213 02:32:45.758409 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.759239 kubelet[1859]: E1213 02:32:45.758719 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.759239 kubelet[1859]: W1213 02:32:45.758732 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.759239 kubelet[1859]: E1213 02:32:45.758761 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.759239 kubelet[1859]: E1213 02:32:45.758964 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.759239 kubelet[1859]: W1213 02:32:45.758977 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.759239 kubelet[1859]: E1213 02:32:45.759009 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.759239 kubelet[1859]: E1213 02:32:45.759203 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.759239 kubelet[1859]: W1213 02:32:45.759214 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.759239 kubelet[1859]: E1213 02:32:45.759244 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.762495 kubelet[1859]: E1213 02:32:45.759486 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.762495 kubelet[1859]: W1213 02:32:45.759499 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.762495 kubelet[1859]: E1213 02:32:45.759606 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.762495 kubelet[1859]: E1213 02:32:45.760028 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.762495 kubelet[1859]: W1213 02:32:45.760039 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.762495 kubelet[1859]: E1213 02:32:45.760055 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.762495 kubelet[1859]: E1213 02:32:45.760227 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.762495 kubelet[1859]: W1213 02:32:45.760238 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.762495 kubelet[1859]: E1213 02:32:45.760249 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.762495 kubelet[1859]: E1213 02:32:45.760506 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.763523 kubelet[1859]: W1213 02:32:45.760517 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.763523 kubelet[1859]: E1213 02:32:45.760540 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.763523 kubelet[1859]: E1213 02:32:45.760931 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.763523 kubelet[1859]: W1213 02:32:45.760943 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.763523 kubelet[1859]: E1213 02:32:45.760970 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.763523 kubelet[1859]: E1213 02:32:45.761500 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.763523 kubelet[1859]: W1213 02:32:45.761549 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.763523 kubelet[1859]: E1213 02:32:45.761610 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:32:45.763523 kubelet[1859]: E1213 02:32:45.761864 1859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:32:45.763523 kubelet[1859]: W1213 02:32:45.761878 1859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:32:45.764724 kubelet[1859]: E1213 02:32:45.761919 1859 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:32:45.953095 containerd[1459]: time="2024-12-13T02:32:45.952971192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:45.954615 containerd[1459]: time="2024-12-13T02:32:45.954561846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 02:32:45.956629 containerd[1459]: time="2024-12-13T02:32:45.956478961Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:45.960350 containerd[1459]: time="2024-12-13T02:32:45.960238222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:45.961200 containerd[1459]: time="2024-12-13T02:32:45.961144772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.090220769s" Dec 13 02:32:45.961200 containerd[1459]: time="2024-12-13T02:32:45.961189566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 02:32:45.965191 containerd[1459]: time="2024-12-13T02:32:45.965129074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 02:32:45.966584 containerd[1459]: time="2024-12-13T02:32:45.966514573Z" level=info msg="CreateContainer within sandbox \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:32:46.009968 containerd[1459]: time="2024-12-13T02:32:46.009627521Z" level=info msg="CreateContainer within sandbox \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\"" Dec 13 02:32:46.013517 containerd[1459]: time="2024-12-13T02:32:46.011843367Z" level=info msg="StartContainer for \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\"" Dec 13 02:32:46.046965 systemd[1]: Started cri-containerd-7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2.scope - libcontainer container 7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2. Dec 13 02:32:46.103452 containerd[1459]: time="2024-12-13T02:32:46.103398018Z" level=info msg="StartContainer for \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\" returns successfully" Dec 13 02:32:46.113221 systemd[1]: cri-containerd-7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2.scope: Deactivated successfully. 
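For scale, the PullImage line above reports both the image size and the elapsed pull time, so the effective throughput for the pod2daemon-flexvol pull falls out directly; the figures below are copied from that line.

# Back-of-the-envelope throughput for the pull logged above.
size_bytes = 6_855_165        # repo digest size reported by containerd
elapsed_s = 2.090220769       # "in 2.090220769s"
print(f"{size_bytes / elapsed_s / (1024 * 1024):.2f} MiB/s")  # ~3.13 MiB/s

Roughly 3.1 MiB/s from ghcr.io during this boot, which is consistent with the pull starting well before the sandbox was ready.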
Dec 13 02:32:46.427021 kubelet[1859]: E1213 02:32:46.426944 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:46.832706 containerd[1459]: time="2024-12-13T02:32:46.692744066Z" level=info msg="StopContainer for \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\" with timeout 5 (s)" Dec 13 02:32:46.900817 containerd[1459]: time="2024-12-13T02:32:46.900521162Z" level=info msg="Stop container \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\" with signal terminated" Dec 13 02:32:46.901556 containerd[1459]: time="2024-12-13T02:32:46.901416601Z" level=info msg="shim disconnected" id=7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2 namespace=k8s.io Dec 13 02:32:46.901556 containerd[1459]: time="2024-12-13T02:32:46.901516709Z" level=warning msg="cleaning up after shim disconnected" id=7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2 namespace=k8s.io Dec 13 02:32:46.901556 containerd[1459]: time="2024-12-13T02:32:46.901537949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:32:46.951729 containerd[1459]: time="2024-12-13T02:32:46.951587737Z" level=info msg="StopContainer for \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\" returns successfully" Dec 13 02:32:46.954460 containerd[1459]: time="2024-12-13T02:32:46.954345560Z" level=info msg="StopPodSandbox for \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\"" Dec 13 02:32:46.954460 containerd[1459]: time="2024-12-13T02:32:46.954430349Z" level=info msg="Container to stop \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:32:46.959565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c-shm.mount: Deactivated successfully. Dec 13 02:32:46.974613 systemd[1]: cri-containerd-46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c.scope: Deactivated successfully. Dec 13 02:32:47.023420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c-rootfs.mount: Deactivated successfully. 
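The stop sequence above ("with timeout 5 (s)", then "with signal terminated") is the usual graceful-stop pattern: deliver SIGTERM, wait out the grace period, and only force-kill if the process has not exited. The sketch below shows that generic pattern in Python; it is not containerd's implementation, just the semantics those two log lines describe.

import signal
import subprocess


def stop(proc: subprocess.Popen, grace_seconds: float = 5.0) -> int:
    """Terminate gracefully, escalating to SIGKILL after the grace period."""
    proc.send_signal(signal.SIGTERM)
    try:
        return proc.wait(timeout=grace_seconds)
    except subprocess.TimeoutExpired:
        proc.kill()                 # SIGKILL after the timeout expires
        return proc.wait()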
Dec 13 02:32:47.048722 containerd[1459]: time="2024-12-13T02:32:47.048582050Z" level=info msg="shim disconnected" id=46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c namespace=k8s.io Dec 13 02:32:47.049418 containerd[1459]: time="2024-12-13T02:32:47.048913742Z" level=warning msg="cleaning up after shim disconnected" id=46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c namespace=k8s.io Dec 13 02:32:47.049418 containerd[1459]: time="2024-12-13T02:32:47.048945812Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:32:47.079772 containerd[1459]: time="2024-12-13T02:32:47.079687188Z" level=info msg="TearDown network for sandbox \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\" successfully" Dec 13 02:32:47.079772 containerd[1459]: time="2024-12-13T02:32:47.079755556Z" level=info msg="StopPodSandbox for \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\" returns successfully" Dec 13 02:32:47.173258 kubelet[1859]: I1213 02:32:47.173173 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-var-run-calico\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.173258 kubelet[1859]: I1213 02:32:47.173254 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-net-dir\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.173596 kubelet[1859]: I1213 02:32:47.173300 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-var-lib-calico\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.173596 kubelet[1859]: I1213 02:32:47.173340 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-lib-modules\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.173596 kubelet[1859]: I1213 02:32:47.173392 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpwsb\" (UniqueName: \"kubernetes.io/projected/a3474f86-76d1-4cef-86bb-b305ef64f399-kube-api-access-wpwsb\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.173596 kubelet[1859]: I1213 02:32:47.173433 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-policysync\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.173596 kubelet[1859]: I1213 02:32:47.173486 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-flexvol-driver-host\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.173596 kubelet[1859]: I1213 02:32:47.173528 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-log-dir\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.174025 kubelet[1859]: I1213 02:32:47.173566 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-bin-dir\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.174025 kubelet[1859]: I1213 02:32:47.173614 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3474f86-76d1-4cef-86bb-b305ef64f399-tigera-ca-bundle\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.174025 kubelet[1859]: I1213 02:32:47.173706 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3474f86-76d1-4cef-86bb-b305ef64f399-node-certs\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.174025 kubelet[1859]: I1213 02:32:47.173749 1859 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-xtables-lock\") pod \"a3474f86-76d1-4cef-86bb-b305ef64f399\" (UID: \"a3474f86-76d1-4cef-86bb-b305ef64f399\") " Dec 13 02:32:47.174025 kubelet[1859]: I1213 02:32:47.173846 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:32:47.174025 kubelet[1859]: I1213 02:32:47.173918 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:32:47.174440 kubelet[1859]: I1213 02:32:47.173956 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:32:47.174440 kubelet[1859]: I1213 02:32:47.173995 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:32:47.174440 kubelet[1859]: I1213 02:32:47.174031 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:32:47.174613 kubelet[1859]: I1213 02:32:47.174572 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:32:47.174710 kubelet[1859]: I1213 02:32:47.174625 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-policysync" (OuterVolumeSpecName: "policysync") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:32:47.174784 kubelet[1859]: I1213 02:32:47.174754 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:32:47.175636 kubelet[1859]: I1213 02:32:47.175570 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3474f86-76d1-4cef-86bb-b305ef64f399-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:32:47.175795 kubelet[1859]: I1213 02:32:47.175707 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:32:47.180840 kubelet[1859]: I1213 02:32:47.180532 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3474f86-76d1-4cef-86bb-b305ef64f399-node-certs" (OuterVolumeSpecName: "node-certs") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:32:47.182727 kubelet[1859]: I1213 02:32:47.181837 1859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3474f86-76d1-4cef-86bb-b305ef64f399-kube-api-access-wpwsb" (OuterVolumeSpecName: "kube-api-access-wpwsb") pod "a3474f86-76d1-4cef-86bb-b305ef64f399" (UID: "a3474f86-76d1-4cef-86bb-b305ef64f399"). InnerVolumeSpecName "kube-api-access-wpwsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:32:47.184447 systemd[1]: var-lib-kubelet-pods-a3474f86\x2d76d1\x2d4cef\x2d86bb\x2db305ef64f399-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpwsb.mount: Deactivated successfully. Dec 13 02:32:47.189131 systemd[1]: var-lib-kubelet-pods-a3474f86\x2d76d1\x2d4cef\x2d86bb\x2db305ef64f399-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Dec 13 02:32:47.274584 kubelet[1859]: I1213 02:32:47.274478 1859 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-flexvol-driver-host\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.274584 kubelet[1859]: I1213 02:32:47.274549 1859 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-log-dir\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.274584 kubelet[1859]: I1213 02:32:47.274574 1859 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3474f86-76d1-4cef-86bb-b305ef64f399-tigera-ca-bundle\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.274584 kubelet[1859]: I1213 02:32:47.274598 1859 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3474f86-76d1-4cef-86bb-b305ef64f399-node-certs\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.274584 kubelet[1859]: I1213 02:32:47.274622 1859 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-bin-dir\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.275294 kubelet[1859]: I1213 02:32:47.274688 1859 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-xtables-lock\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.275294 kubelet[1859]: I1213 02:32:47.274712 1859 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-lib-modules\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.275294 kubelet[1859]: I1213 02:32:47.274733 1859 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wpwsb\" (UniqueName: \"kubernetes.io/projected/a3474f86-76d1-4cef-86bb-b305ef64f399-kube-api-access-wpwsb\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.275294 kubelet[1859]: I1213 02:32:47.274756 1859 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-policysync\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.275294 kubelet[1859]: I1213 02:32:47.274776 1859 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-var-run-calico\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.275294 kubelet[1859]: I1213 02:32:47.274796 1859 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-cni-net-dir\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.275294 kubelet[1859]: I1213 02:32:47.274817 1859 reconciler_common.go:289] "Volume detached for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3474f86-76d1-4cef-86bb-b305ef64f399-var-lib-calico\") on node \"172.24.4.31\" DevicePath \"\"" Dec 13 02:32:47.428360 kubelet[1859]: E1213 02:32:47.428103 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:47.635065 kubelet[1859]: E1213 02:32:47.634895 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:47.694854 kubelet[1859]: I1213 02:32:47.693334 1859 scope.go:117] "RemoveContainer" containerID="7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2" Dec 13 02:32:47.697728 containerd[1459]: time="2024-12-13T02:32:47.697586448Z" level=info msg="RemoveContainer for \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\"" Dec 13 02:32:47.704975 systemd[1]: Removed slice kubepods-besteffort-poda3474f86_76d1_4cef_86bb_b305ef64f399.slice - libcontainer container kubepods-besteffort-poda3474f86_76d1_4cef_86bb_b305ef64f399.slice. Dec 13 02:32:47.935669 containerd[1459]: time="2024-12-13T02:32:47.935519902Z" level=info msg="RemoveContainer for \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\" returns successfully" Dec 13 02:32:47.937735 kubelet[1859]: I1213 02:32:47.937469 1859 scope.go:117] "RemoveContainer" containerID="7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2" Dec 13 02:32:47.939070 containerd[1459]: time="2024-12-13T02:32:47.938730834Z" level=error msg="ContainerStatus for \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\": not found" Dec 13 02:32:47.939237 kubelet[1859]: E1213 02:32:47.939147 1859 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\": not found" containerID="7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2" Dec 13 02:32:47.939409 kubelet[1859]: I1213 02:32:47.939213 1859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2"} err="failed to get container status \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c6d0244685daf1f8fbe57d9759ed69cf09cee32a50da10ccf2b5bf51940beb2\": not found" Dec 13 02:32:48.359383 kubelet[1859]: I1213 02:32:48.359259 1859 topology_manager.go:215] "Topology Admit Handler" podUID="f563a5f9-c59b-4887-944b-589bfcd6f95a" podNamespace="calico-system" podName="calico-node-j92l2" Dec 13 02:32:48.359383 kubelet[1859]: E1213 02:32:48.359392 1859 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3474f86-76d1-4cef-86bb-b305ef64f399" containerName="flexvol-driver" Dec 13 02:32:48.362559 kubelet[1859]: I1213 02:32:48.359438 1859 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3474f86-76d1-4cef-86bb-b305ef64f399" containerName="flexvol-driver" Dec 13 02:32:48.379111 
systemd[1]: Created slice kubepods-besteffort-podf563a5f9_c59b_4887_944b_589bfcd6f95a.slice - libcontainer container kubepods-besteffort-podf563a5f9_c59b_4887_944b_589bfcd6f95a.slice. Dec 13 02:32:48.383057 kubelet[1859]: I1213 02:32:48.382806 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f563a5f9-c59b-4887-944b-589bfcd6f95a-lib-modules\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.384422 kubelet[1859]: I1213 02:32:48.383604 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f563a5f9-c59b-4887-944b-589bfcd6f95a-xtables-lock\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.384422 kubelet[1859]: I1213 02:32:48.383754 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f563a5f9-c59b-4887-944b-589bfcd6f95a-tigera-ca-bundle\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.384422 kubelet[1859]: I1213 02:32:48.383808 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f563a5f9-c59b-4887-944b-589bfcd6f95a-node-certs\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.384422 kubelet[1859]: I1213 02:32:48.383853 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f563a5f9-c59b-4887-944b-589bfcd6f95a-var-run-calico\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.384422 kubelet[1859]: I1213 02:32:48.383894 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f563a5f9-c59b-4887-944b-589bfcd6f95a-cni-bin-dir\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.385337 kubelet[1859]: I1213 02:32:48.383939 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f563a5f9-c59b-4887-944b-589bfcd6f95a-flexvol-driver-host\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.385337 kubelet[1859]: I1213 02:32:48.383990 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f563a5f9-c59b-4887-944b-589bfcd6f95a-cni-net-dir\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.385337 kubelet[1859]: I1213 02:32:48.384040 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f563a5f9-c59b-4887-944b-589bfcd6f95a-policysync\") pod \"calico-node-j92l2\" 
(UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.385337 kubelet[1859]: I1213 02:32:48.384079 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f563a5f9-c59b-4887-944b-589bfcd6f95a-var-lib-calico\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.385337 kubelet[1859]: I1213 02:32:48.384121 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f563a5f9-c59b-4887-944b-589bfcd6f95a-cni-log-dir\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.385773 kubelet[1859]: I1213 02:32:48.384186 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl9mm\" (UniqueName: \"kubernetes.io/projected/f563a5f9-c59b-4887-944b-589bfcd6f95a-kube-api-access-zl9mm\") pod \"calico-node-j92l2\" (UID: \"f563a5f9-c59b-4887-944b-589bfcd6f95a\") " pod="calico-system/calico-node-j92l2" Dec 13 02:32:48.428408 kubelet[1859]: E1213 02:32:48.428326 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:48.642592 kubelet[1859]: I1213 02:32:48.642041 1859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3474f86-76d1-4cef-86bb-b305ef64f399" path="/var/lib/kubelet/pods/a3474f86-76d1-4cef-86bb-b305ef64f399/volumes" Dec 13 02:32:48.688038 containerd[1459]: time="2024-12-13T02:32:48.687884692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j92l2,Uid:f563a5f9-c59b-4887-944b-589bfcd6f95a,Namespace:calico-system,Attempt:0,}" Dec 13 02:32:48.757435 containerd[1459]: time="2024-12-13T02:32:48.755971168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:32:48.757435 containerd[1459]: time="2024-12-13T02:32:48.756132190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:32:48.757435 containerd[1459]: time="2024-12-13T02:32:48.756221838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:48.757435 containerd[1459]: time="2024-12-13T02:32:48.756417776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:48.796878 systemd[1]: Started cri-containerd-abd706cb242bbb32596125590c0dd5404ba8a5b0f29970458f3b32e71097caf3.scope - libcontainer container abd706cb242bbb32596125590c0dd5404ba8a5b0f29970458f3b32e71097caf3. 
Dec 13 02:32:48.825558 containerd[1459]: time="2024-12-13T02:32:48.825516120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j92l2,Uid:f563a5f9-c59b-4887-944b-589bfcd6f95a,Namespace:calico-system,Attempt:0,} returns sandbox id \"abd706cb242bbb32596125590c0dd5404ba8a5b0f29970458f3b32e71097caf3\"" Dec 13 02:32:48.829516 containerd[1459]: time="2024-12-13T02:32:48.829327117Z" level=info msg="CreateContainer within sandbox \"abd706cb242bbb32596125590c0dd5404ba8a5b0f29970458f3b32e71097caf3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:32:48.880515 containerd[1459]: time="2024-12-13T02:32:48.880313763Z" level=info msg="CreateContainer within sandbox \"abd706cb242bbb32596125590c0dd5404ba8a5b0f29970458f3b32e71097caf3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c5accf9021a2ce5de0161475accfde7a9f6c6e7413c4b90786dcef9e6edb9344\"" Dec 13 02:32:48.882853 containerd[1459]: time="2024-12-13T02:32:48.881383659Z" level=info msg="StartContainer for \"c5accf9021a2ce5de0161475accfde7a9f6c6e7413c4b90786dcef9e6edb9344\"" Dec 13 02:32:48.932870 systemd[1]: Started cri-containerd-c5accf9021a2ce5de0161475accfde7a9f6c6e7413c4b90786dcef9e6edb9344.scope - libcontainer container c5accf9021a2ce5de0161475accfde7a9f6c6e7413c4b90786dcef9e6edb9344. Dec 13 02:32:48.988570 containerd[1459]: time="2024-12-13T02:32:48.988442177Z" level=info msg="StartContainer for \"c5accf9021a2ce5de0161475accfde7a9f6c6e7413c4b90786dcef9e6edb9344\" returns successfully" Dec 13 02:32:48.997118 systemd[1]: cri-containerd-c5accf9021a2ce5de0161475accfde7a9f6c6e7413c4b90786dcef9e6edb9344.scope: Deactivated successfully. Dec 13 02:32:49.116975 containerd[1459]: time="2024-12-13T02:32:49.116551166Z" level=info msg="shim disconnected" id=c5accf9021a2ce5de0161475accfde7a9f6c6e7413c4b90786dcef9e6edb9344 namespace=k8s.io Dec 13 02:32:49.116975 containerd[1459]: time="2024-12-13T02:32:49.116696458Z" level=warning msg="cleaning up after shim disconnected" id=c5accf9021a2ce5de0161475accfde7a9f6c6e7413c4b90786dcef9e6edb9344 namespace=k8s.io Dec 13 02:32:49.116975 containerd[1459]: time="2024-12-13T02:32:49.116723178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:32:49.428992 kubelet[1859]: E1213 02:32:49.428917 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:49.634762 kubelet[1859]: E1213 02:32:49.634696 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:50.429855 kubelet[1859]: E1213 02:32:50.429809 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:50.776894 containerd[1459]: time="2024-12-13T02:32:50.776589137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:50.780155 containerd[1459]: time="2024-12-13T02:32:50.780049186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 02:32:50.784978 containerd[1459]: time="2024-12-13T02:32:50.784830714Z" level=info msg="ImageCreate event 
name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:50.794057 containerd[1459]: time="2024-12-13T02:32:50.793847174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:50.797364 containerd[1459]: time="2024-12-13T02:32:50.795767716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 4.830582616s" Dec 13 02:32:50.797364 containerd[1459]: time="2024-12-13T02:32:50.795851122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 02:32:50.799180 containerd[1459]: time="2024-12-13T02:32:50.799120484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 02:32:50.819219 containerd[1459]: time="2024-12-13T02:32:50.819119332Z" level=info msg="CreateContainer within sandbox \"e05b07a61dde58a56fca670cb922c845b9234b1a7875086af99fdf9ab5cd795d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:32:50.912049 containerd[1459]: time="2024-12-13T02:32:50.911897918Z" level=info msg="CreateContainer within sandbox \"e05b07a61dde58a56fca670cb922c845b9234b1a7875086af99fdf9ab5cd795d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"229a5f44c16a5e75459089986f941cf27ea87408c1f2d8a8dd36c3c8b1d6142f\"" Dec 13 02:32:50.914710 containerd[1459]: time="2024-12-13T02:32:50.913132884Z" level=info msg="StartContainer for \"229a5f44c16a5e75459089986f941cf27ea87408c1f2d8a8dd36c3c8b1d6142f\"" Dec 13 02:32:50.982890 systemd[1]: Started cri-containerd-229a5f44c16a5e75459089986f941cf27ea87408c1f2d8a8dd36c3c8b1d6142f.scope - libcontainer container 229a5f44c16a5e75459089986f941cf27ea87408c1f2d8a8dd36c3c8b1d6142f. 
Dec 13 02:32:51.062460 containerd[1459]: time="2024-12-13T02:32:51.062162589Z" level=info msg="StartContainer for \"229a5f44c16a5e75459089986f941cf27ea87408c1f2d8a8dd36c3c8b1d6142f\" returns successfully" Dec 13 02:32:51.431250 kubelet[1859]: E1213 02:32:51.431177 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:51.635398 kubelet[1859]: E1213 02:32:51.635152 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:52.432318 kubelet[1859]: E1213 02:32:52.432206 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:52.727817 kubelet[1859]: I1213 02:32:52.727153 1859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:32:53.433383 kubelet[1859]: E1213 02:32:53.433250 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:53.635534 kubelet[1859]: E1213 02:32:53.635448 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:54.433747 kubelet[1859]: E1213 02:32:54.433586 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:55.434250 kubelet[1859]: E1213 02:32:55.434217 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:55.635535 kubelet[1859]: E1213 02:32:55.635148 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:56.416867 kubelet[1859]: E1213 02:32:56.416795 1859 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:56.435397 kubelet[1859]: E1213 02:32:56.435194 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:57.436185 kubelet[1859]: E1213 02:32:57.436070 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:58.159336 kubelet[1859]: E1213 02:32:57.634707 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:32:58.159336 kubelet[1859]: I1213 02:32:57.638046 1859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:32:58.412484 kubelet[1859]: I1213 02:32:58.411574 1859 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="calico-system/calico-typha-55b9c6d646-rqsvh" podStartSLOduration=8.221677999 podStartE2EDuration="13.411535867s" podCreationTimestamp="2024-12-13 02:32:45 +0000 UTC" firstStartedPulling="2024-12-13 02:32:45.608331921 +0000 UTC m=+11.217725362" lastFinishedPulling="2024-12-13 02:32:50.798189748 +0000 UTC m=+16.407583230" observedRunningTime="2024-12-13 02:32:51.756958589 +0000 UTC m=+17.366352100" watchObservedRunningTime="2024-12-13 02:32:58.411535867 +0000 UTC m=+24.020929348" Dec 13 02:32:58.437254 kubelet[1859]: E1213 02:32:58.437179 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:58.464415 containerd[1459]: time="2024-12-13T02:32:58.464182596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:58.466399 containerd[1459]: time="2024-12-13T02:32:58.466277124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 02:32:58.468896 containerd[1459]: time="2024-12-13T02:32:58.468709667Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:58.476968 containerd[1459]: time="2024-12-13T02:32:58.476794931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:32:58.478823 containerd[1459]: time="2024-12-13T02:32:58.478575450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 7.679095651s" Dec 13 02:32:58.478823 containerd[1459]: time="2024-12-13T02:32:58.478636094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 02:32:58.484283 containerd[1459]: time="2024-12-13T02:32:58.484173018Z" level=info msg="CreateContainer within sandbox \"abd706cb242bbb32596125590c0dd5404ba8a5b0f29970458f3b32e71097caf3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 02:32:58.530483 containerd[1459]: time="2024-12-13T02:32:58.530337500Z" level=info msg="CreateContainer within sandbox \"abd706cb242bbb32596125590c0dd5404ba8a5b0f29970458f3b32e71097caf3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b52380892374f22f7e4f6d78d6d87ec4393120bcec27d794560c60bd6a5517f0\"" Dec 13 02:32:58.532006 containerd[1459]: time="2024-12-13T02:32:58.531901994Z" level=info msg="StartContainer for \"b52380892374f22f7e4f6d78d6d87ec4393120bcec27d794560c60bd6a5517f0\"" Dec 13 02:32:58.590853 systemd[1]: Started cri-containerd-b52380892374f22f7e4f6d78d6d87ec4393120bcec27d794560c60bd6a5517f0.scope - libcontainer container b52380892374f22f7e4f6d78d6d87ec4393120bcec27d794560c60bd6a5517f0. 
Dec 13 02:32:58.639365 containerd[1459]: time="2024-12-13T02:32:58.639264963Z" level=info msg="StartContainer for \"b52380892374f22f7e4f6d78d6d87ec4393120bcec27d794560c60bd6a5517f0\" returns successfully" Dec 13 02:32:59.437693 kubelet[1859]: E1213 02:32:59.437619 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:59.635601 kubelet[1859]: E1213 02:32:59.635495 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:33:00.439912 kubelet[1859]: E1213 02:33:00.439802 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:00.677633 containerd[1459]: time="2024-12-13T02:33:00.677530199Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:33:00.684427 systemd[1]: cri-containerd-b52380892374f22f7e4f6d78d6d87ec4393120bcec27d794560c60bd6a5517f0.scope: Deactivated successfully. Dec 13 02:33:00.723818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b52380892374f22f7e4f6d78d6d87ec4393120bcec27d794560c60bd6a5517f0-rootfs.mount: Deactivated successfully. Dec 13 02:33:00.763908 kubelet[1859]: I1213 02:33:00.763538 1859 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:33:01.440741 kubelet[1859]: E1213 02:33:01.440613 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:01.650049 systemd[1]: Created slice kubepods-besteffort-poda8343490_cada_48ea_8455_e41a86be0a3c.slice - libcontainer container kubepods-besteffort-poda8343490_cada_48ea_8455_e41a86be0a3c.slice. 
Dec 13 02:33:01.656539 containerd[1459]: time="2024-12-13T02:33:01.656089947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbm6l,Uid:a8343490-cada-48ea-8455-e41a86be0a3c,Namespace:calico-system,Attempt:0,}" Dec 13 02:33:01.952084 containerd[1459]: time="2024-12-13T02:33:01.951925547Z" level=info msg="shim disconnected" id=b52380892374f22f7e4f6d78d6d87ec4393120bcec27d794560c60bd6a5517f0 namespace=k8s.io Dec 13 02:33:01.952084 containerd[1459]: time="2024-12-13T02:33:01.952066842Z" level=warning msg="cleaning up after shim disconnected" id=b52380892374f22f7e4f6d78d6d87ec4393120bcec27d794560c60bd6a5517f0 namespace=k8s.io Dec 13 02:33:01.952911 containerd[1459]: time="2024-12-13T02:33:01.952093171Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:33:02.193782 containerd[1459]: time="2024-12-13T02:33:02.193629894Z" level=error msg="Failed to destroy network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:02.196023 containerd[1459]: time="2024-12-13T02:33:02.195959300Z" level=error msg="encountered an error cleaning up failed sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:02.196183 containerd[1459]: time="2024-12-13T02:33:02.196039301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbm6l,Uid:a8343490-cada-48ea-8455-e41a86be0a3c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:02.196287 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed-shm.mount: Deactivated successfully. 
Dec 13 02:33:02.197533 kubelet[1859]: E1213 02:33:02.196416 1859 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:02.197533 kubelet[1859]: E1213 02:33:02.196487 1859 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cbm6l" Dec 13 02:33:02.197533 kubelet[1859]: E1213 02:33:02.196514 1859 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cbm6l" Dec 13 02:33:02.197673 kubelet[1859]: E1213 02:33:02.196570 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cbm6l_calico-system(a8343490-cada-48ea-8455-e41a86be0a3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cbm6l_calico-system(a8343490-cada-48ea-8455-e41a86be0a3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:33:02.441520 kubelet[1859]: E1213 02:33:02.441416 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:02.761739 containerd[1459]: time="2024-12-13T02:33:02.760269863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 02:33:02.761927 kubelet[1859]: I1213 02:33:02.760537 1859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:02.762049 containerd[1459]: time="2024-12-13T02:33:02.761799276Z" level=info msg="StopPodSandbox for \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\"" Dec 13 02:33:02.762821 containerd[1459]: time="2024-12-13T02:33:02.762423166Z" level=info msg="Ensure that sandbox 3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed in task-service has been cleanup successfully" Dec 13 02:33:02.833006 containerd[1459]: time="2024-12-13T02:33:02.832780620Z" level=error msg="StopPodSandbox for \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\" failed" error="failed to destroy network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:02.833496 kubelet[1859]: E1213 02:33:02.833353 1859 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:02.833496 kubelet[1859]: E1213 02:33:02.833447 1859 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed"} Dec 13 02:33:02.833753 kubelet[1859]: E1213 02:33:02.833523 1859 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8343490-cada-48ea-8455-e41a86be0a3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:33:02.833753 kubelet[1859]: E1213 02:33:02.833599 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8343490-cada-48ea-8455-e41a86be0a3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cbm6l" podUID="a8343490-cada-48ea-8455-e41a86be0a3c" Dec 13 02:33:03.441832 kubelet[1859]: E1213 02:33:03.441703 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:04.442137 kubelet[1859]: E1213 02:33:04.442076 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:05.442425 kubelet[1859]: E1213 02:33:05.442328 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:06.443211 kubelet[1859]: E1213 02:33:06.443168 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:06.550371 kubelet[1859]: I1213 02:33:06.550296 1859 topology_manager.go:215] "Topology Admit Handler" podUID="4369aa74-ff91-4584-b36b-6b69892ac078" podNamespace="default" podName="nginx-deployment-85f456d6dd-xfkc9" Dec 13 02:33:06.560445 systemd[1]: Created slice kubepods-besteffort-pod4369aa74_ff91_4584_b36b_6b69892ac078.slice - libcontainer container kubepods-besteffort-pod4369aa74_ff91_4584_b36b_6b69892ac078.slice. 
Dec 13 02:33:06.713689 kubelet[1859]: I1213 02:33:06.713194 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls959\" (UniqueName: \"kubernetes.io/projected/4369aa74-ff91-4584-b36b-6b69892ac078-kube-api-access-ls959\") pod \"nginx-deployment-85f456d6dd-xfkc9\" (UID: \"4369aa74-ff91-4584-b36b-6b69892ac078\") " pod="default/nginx-deployment-85f456d6dd-xfkc9" Dec 13 02:33:06.864797 containerd[1459]: time="2024-12-13T02:33:06.864715610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xfkc9,Uid:4369aa74-ff91-4584-b36b-6b69892ac078,Namespace:default,Attempt:0,}" Dec 13 02:33:06.999271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24-shm.mount: Deactivated successfully. Dec 13 02:33:07.001158 kubelet[1859]: E1213 02:33:06.999660 1859 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:07.001158 kubelet[1859]: E1213 02:33:06.999739 1859 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xfkc9" Dec 13 02:33:07.001158 kubelet[1859]: E1213 02:33:06.999764 1859 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xfkc9" Dec 13 02:33:07.001267 containerd[1459]: time="2024-12-13T02:33:06.996799657Z" level=error msg="Failed to destroy network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:07.001267 containerd[1459]: time="2024-12-13T02:33:06.997165368Z" level=error msg="encountered an error cleaning up failed sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:07.001267 containerd[1459]: time="2024-12-13T02:33:06.997260017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xfkc9,Uid:4369aa74-ff91-4584-b36b-6b69892ac078,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:07.001396 kubelet[1859]: E1213 02:33:06.999813 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xfkc9_default(4369aa74-ff91-4584-b36b-6b69892ac078)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xfkc9_default(4369aa74-ff91-4584-b36b-6b69892ac078)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xfkc9" podUID="4369aa74-ff91-4584-b36b-6b69892ac078" Dec 13 02:33:07.443661 kubelet[1859]: E1213 02:33:07.443543 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:07.791184 kubelet[1859]: I1213 02:33:07.790771 1859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:07.792455 containerd[1459]: time="2024-12-13T02:33:07.792364041Z" level=info msg="StopPodSandbox for \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\"" Dec 13 02:33:07.792853 containerd[1459]: time="2024-12-13T02:33:07.792789193Z" level=info msg="Ensure that sandbox 845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24 in task-service has been cleanup successfully" Dec 13 02:33:07.861543 containerd[1459]: time="2024-12-13T02:33:07.861438952Z" level=error msg="StopPodSandbox for \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\" failed" error="failed to destroy network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:33:07.862115 kubelet[1859]: E1213 02:33:07.861988 1859 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:07.862241 kubelet[1859]: E1213 02:33:07.862168 1859 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24"} Dec 13 02:33:07.862388 kubelet[1859]: E1213 02:33:07.862290 1859 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4369aa74-ff91-4584-b36b-6b69892ac078\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Dec 13 02:33:07.862556 kubelet[1859]: E1213 02:33:07.862439 1859 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4369aa74-ff91-4584-b36b-6b69892ac078\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xfkc9" podUID="4369aa74-ff91-4584-b36b-6b69892ac078" Dec 13 02:33:08.444811 kubelet[1859]: E1213 02:33:08.444741 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:09.445444 kubelet[1859]: E1213 02:33:09.445000 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:10.446009 kubelet[1859]: E1213 02:33:10.445957 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:11.446519 kubelet[1859]: E1213 02:33:11.446464 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:12.447455 kubelet[1859]: E1213 02:33:12.447357 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:12.940379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428526478.mount: Deactivated successfully. Dec 13 02:33:13.448404 kubelet[1859]: E1213 02:33:13.448311 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:13.474726 containerd[1459]: time="2024-12-13T02:33:13.473978783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:13.477824 containerd[1459]: time="2024-12-13T02:33:13.477625792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 02:33:13.480022 containerd[1459]: time="2024-12-13T02:33:13.479881160Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:13.488261 containerd[1459]: time="2024-12-13T02:33:13.488137470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:13.491190 containerd[1459]: time="2024-12-13T02:33:13.489916200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.729533583s" Dec 13 02:33:13.491190 containerd[1459]: time="2024-12-13T02:33:13.490011439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 02:33:13.562506 
containerd[1459]: time="2024-12-13T02:33:13.562472501Z" level=info msg="CreateContainer within sandbox \"abd706cb242bbb32596125590c0dd5404ba8a5b0f29970458f3b32e71097caf3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:33:13.656616 containerd[1459]: time="2024-12-13T02:33:13.656513195Z" level=info msg="CreateContainer within sandbox \"abd706cb242bbb32596125590c0dd5404ba8a5b0f29970458f3b32e71097caf3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cc226e7982261b105ead3ac0a5f895e2f66c81efeabce251ed22bb5c67b8f3b3\"" Dec 13 02:33:13.657950 containerd[1459]: time="2024-12-13T02:33:13.657554816Z" level=info msg="StartContainer for \"cc226e7982261b105ead3ac0a5f895e2f66c81efeabce251ed22bb5c67b8f3b3\"" Dec 13 02:33:13.798211 systemd[1]: Started cri-containerd-cc226e7982261b105ead3ac0a5f895e2f66c81efeabce251ed22bb5c67b8f3b3.scope - libcontainer container cc226e7982261b105ead3ac0a5f895e2f66c81efeabce251ed22bb5c67b8f3b3. Dec 13 02:33:13.887457 containerd[1459]: time="2024-12-13T02:33:13.887335720Z" level=info msg="StartContainer for \"cc226e7982261b105ead3ac0a5f895e2f66c81efeabce251ed22bb5c67b8f3b3\" returns successfully" Dec 13 02:33:14.152156 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 02:33:14.152490 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 02:33:14.448998 kubelet[1859]: E1213 02:33:14.448863 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:15.449588 kubelet[1859]: E1213 02:33:15.449472 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:16.233723 kernel: bpftool[2992]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 02:33:16.417838 kubelet[1859]: E1213 02:33:16.417788 1859 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:16.450458 kubelet[1859]: E1213 02:33:16.450366 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:16.523256 systemd-networkd[1376]: vxlan.calico: Link UP Dec 13 02:33:16.523265 systemd-networkd[1376]: vxlan.calico: Gained carrier Dec 13 02:33:16.638469 containerd[1459]: time="2024-12-13T02:33:16.638117005Z" level=info msg="StopPodSandbox for \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\"" Dec 13 02:33:16.921773 kubelet[1859]: I1213 02:33:16.921698 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j92l2" podStartSLOduration=6.135822982 podStartE2EDuration="29.921674989s" podCreationTimestamp="2024-12-13 02:32:47 +0000 UTC" firstStartedPulling="2024-12-13 02:32:49.708429233 +0000 UTC m=+15.317822714" lastFinishedPulling="2024-12-13 02:33:13.49428123 +0000 UTC m=+39.103674721" observedRunningTime="2024-12-13 02:33:14.937794639 +0000 UTC m=+40.547188150" watchObservedRunningTime="2024-12-13 02:33:16.921674989 +0000 UTC m=+42.531068430" Dec 13 02:33:17.451321 kubelet[1859]: E1213 02:33:17.451222 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:18.147081 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:16.920 [INFO][3042] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:16.922 [INFO][3042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" iface="eth0" netns="/var/run/netns/cni-feb240be-5836-8198-4f70-f93b166a2b5c" Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:16.924 [INFO][3042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" iface="eth0" netns="/var/run/netns/cni-feb240be-5836-8198-4f70-f93b166a2b5c" Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:16.930 [INFO][3042] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" iface="eth0" netns="/var/run/netns/cni-feb240be-5836-8198-4f70-f93b166a2b5c" Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:16.930 [INFO][3042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:16.930 [INFO][3042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:18.305 [INFO][3077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" HandleID="k8s-pod-network.3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:18.305 [INFO][3077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:18.305 [INFO][3077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:18.333 [WARNING][3077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" HandleID="k8s-pod-network.3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:18.333 [INFO][3077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" HandleID="k8s-pod-network.3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:18.340 [INFO][3077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:33:18.346475 containerd[1459]: 2024-12-13 02:33:18.343 [INFO][3042] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:18.348572 containerd[1459]: time="2024-12-13T02:33:18.348313388Z" level=info msg="TearDown network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\" successfully" Dec 13 02:33:18.348572 containerd[1459]: time="2024-12-13T02:33:18.348376307Z" level=info msg="StopPodSandbox for \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\" returns successfully" Dec 13 02:33:18.351947 containerd[1459]: time="2024-12-13T02:33:18.351853548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbm6l,Uid:a8343490-cada-48ea-8455-e41a86be0a3c,Namespace:calico-system,Attempt:1,}" Dec 13 02:33:18.355236 systemd[1]: run-netns-cni\x2dfeb240be\x2d5836\x2d8198\x2d4f70\x2df93b166a2b5c.mount: Deactivated successfully. Dec 13 02:33:18.452356 kubelet[1859]: E1213 02:33:18.452299 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:18.647271 systemd-networkd[1376]: cali98fc69a4789: Link UP Dec 13 02:33:18.648696 systemd-networkd[1376]: cali98fc69a4789: Gained carrier Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.475 [INFO][3089] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.31-k8s-csi--node--driver--cbm6l-eth0 csi-node-driver- calico-system a8343490-cada-48ea-8455-e41a86be0a3c 1304 0 2024-12-13 02:32:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.31 csi-node-driver-cbm6l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali98fc69a4789 [] []}} ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Namespace="calico-system" Pod="csi-node-driver-cbm6l" WorkloadEndpoint="172.24.4.31-k8s-csi--node--driver--cbm6l-" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.475 [INFO][3089] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Namespace="calico-system" Pod="csi-node-driver-cbm6l" WorkloadEndpoint="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.538 [INFO][3100] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" HandleID="k8s-pod-network.d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.559 [INFO][3100] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" HandleID="k8s-pod-network.d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319880), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.31", "pod":"csi-node-driver-cbm6l", "timestamp":"2024-12-13 02:33:18.538051384 +0000 UTC"}, Hostname:"172.24.4.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.560 [INFO][3100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.560 [INFO][3100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.560 [INFO][3100] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.31' Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.564 [INFO][3100] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" host="172.24.4.31" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.578 [INFO][3100] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.31" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.588 [INFO][3100] ipam/ipam.go 489: Trying affinity for 192.168.44.64/26 host="172.24.4.31" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.594 [INFO][3100] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.64/26 host="172.24.4.31" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.598 [INFO][3100] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="172.24.4.31" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.599 [INFO][3100] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" host="172.24.4.31" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.604 [INFO][3100] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283 Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.615 [INFO][3100] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" host="172.24.4.31" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.626 [INFO][3100] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.65/26] block=192.168.44.64/26 handle="k8s-pod-network.d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" host="172.24.4.31" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.626 [INFO][3100] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.65/26] handle="k8s-pod-network.d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" host="172.24.4.31" Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.626 [INFO][3100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:33:18.694903 containerd[1459]: 2024-12-13 02:33:18.627 [INFO][3100] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.65/26] IPv6=[] ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" HandleID="k8s-pod-network.d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.700080 containerd[1459]: 2024-12-13 02:33:18.631 [INFO][3089] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Namespace="calico-system" Pod="csi-node-driver-cbm6l" WorkloadEndpoint="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-csi--node--driver--cbm6l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8343490-cada-48ea-8455-e41a86be0a3c", ResourceVersion:"1304", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 32, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"", Pod:"csi-node-driver-cbm6l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98fc69a4789", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:18.700080 containerd[1459]: 2024-12-13 02:33:18.632 [INFO][3089] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.65/32] ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Namespace="calico-system" Pod="csi-node-driver-cbm6l" WorkloadEndpoint="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.700080 containerd[1459]: 2024-12-13 02:33:18.632 [INFO][3089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98fc69a4789 ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Namespace="calico-system" Pod="csi-node-driver-cbm6l" WorkloadEndpoint="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.700080 containerd[1459]: 2024-12-13 02:33:18.656 [INFO][3089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Namespace="calico-system" Pod="csi-node-driver-cbm6l" WorkloadEndpoint="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.700080 containerd[1459]: 2024-12-13 02:33:18.657 [INFO][3089] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Namespace="calico-system" Pod="csi-node-driver-cbm6l" 
WorkloadEndpoint="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-csi--node--driver--cbm6l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8343490-cada-48ea-8455-e41a86be0a3c", ResourceVersion:"1304", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 32, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283", Pod:"csi-node-driver-cbm6l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98fc69a4789", MAC:"f2:a7:80:8c:1d:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:18.700080 containerd[1459]: 2024-12-13 02:33:18.688 [INFO][3089] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283" Namespace="calico-system" Pod="csi-node-driver-cbm6l" WorkloadEndpoint="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:18.794114 containerd[1459]: time="2024-12-13T02:33:18.793185844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:33:18.794114 containerd[1459]: time="2024-12-13T02:33:18.793427770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:33:18.794114 containerd[1459]: time="2024-12-13T02:33:18.793463156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:18.794114 containerd[1459]: time="2024-12-13T02:33:18.793620672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:18.848811 systemd[1]: Started cri-containerd-d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283.scope - libcontainer container d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283. 
Dec 13 02:33:18.876238 containerd[1459]: time="2024-12-13T02:33:18.876184792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbm6l,Uid:a8343490-cada-48ea-8455-e41a86be0a3c,Namespace:calico-system,Attempt:1,} returns sandbox id \"d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283\"" Dec 13 02:33:18.878533 containerd[1459]: time="2024-12-13T02:33:18.878410599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 02:33:19.453535 kubelet[1859]: E1213 02:33:19.453454 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:20.130971 systemd-networkd[1376]: cali98fc69a4789: Gained IPv6LL Dec 13 02:33:20.454380 kubelet[1859]: E1213 02:33:20.454310 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:20.948964 containerd[1459]: time="2024-12-13T02:33:20.948901014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:20.950171 containerd[1459]: time="2024-12-13T02:33:20.950113454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 02:33:20.951528 containerd[1459]: time="2024-12-13T02:33:20.951474784Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:20.953917 containerd[1459]: time="2024-12-13T02:33:20.953890887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:20.954626 containerd[1459]: time="2024-12-13T02:33:20.954484634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.075963086s" Dec 13 02:33:20.954626 containerd[1459]: time="2024-12-13T02:33:20.954523417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 02:33:20.957289 containerd[1459]: time="2024-12-13T02:33:20.957258600Z" level=info msg="CreateContainer within sandbox \"d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 02:33:20.979804 containerd[1459]: time="2024-12-13T02:33:20.979696175Z" level=info msg="CreateContainer within sandbox \"d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c596a56a606c1055d88471e3e950cfe815c450f4b47b8c5cdc9ab4ba716f1608\"" Dec 13 02:33:20.980675 containerd[1459]: time="2024-12-13T02:33:20.980618168Z" level=info msg="StartContainer for \"c596a56a606c1055d88471e3e950cfe815c450f4b47b8c5cdc9ab4ba716f1608\"" Dec 13 02:33:21.014833 systemd[1]: Started cri-containerd-c596a56a606c1055d88471e3e950cfe815c450f4b47b8c5cdc9ab4ba716f1608.scope - libcontainer container c596a56a606c1055d88471e3e950cfe815c450f4b47b8c5cdc9ab4ba716f1608. 
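The PullImage / Pulled image records above show the CRI plugin fetching ghcr.io/flatcar/calico/csi:v3.29.1 into containerd and reporting the pull duration. As a minimal sketch (not the kubelet's actual code path, which goes through the CRI ImageService), the same pull can be reproduced with the containerd Go client; the socket path and the "k8s.io" namespace below are the usual containerd/CRI defaults and are assumptions here.

// pull_sketch.go - minimal sketch, assuming containerd's default socket and the
// "k8s.io" namespace used by the CRI plugin; not the kubelet's own code path.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image the records above show being pulled.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
}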
Dec 13 02:33:21.054023 containerd[1459]: time="2024-12-13T02:33:21.053020555Z" level=info msg="StartContainer for \"c596a56a606c1055d88471e3e950cfe815c450f4b47b8c5cdc9ab4ba716f1608\" returns successfully" Dec 13 02:33:21.054500 containerd[1459]: time="2024-12-13T02:33:21.054467265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 02:33:21.455366 kubelet[1859]: E1213 02:33:21.455263 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:22.456221 kubelet[1859]: E1213 02:33:22.456103 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:22.804157 containerd[1459]: time="2024-12-13T02:33:22.636474638Z" level=info msg="StopPodSandbox for \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\"" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:22.939 [INFO][3214] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:22.939 [INFO][3214] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" iface="eth0" netns="/var/run/netns/cni-68f1d70c-087e-332c-4162-c12ab2b2d126" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:22.940 [INFO][3214] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" iface="eth0" netns="/var/run/netns/cni-68f1d70c-087e-332c-4162-c12ab2b2d126" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:22.940 [INFO][3214] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" iface="eth0" netns="/var/run/netns/cni-68f1d70c-087e-332c-4162-c12ab2b2d126" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:22.941 [INFO][3214] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:22.941 [INFO][3214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:22.990 [INFO][3221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" HandleID="k8s-pod-network.845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:22.990 [INFO][3221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:22.990 [INFO][3221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:23.007 [WARNING][3221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" HandleID="k8s-pod-network.845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:23.007 [INFO][3221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" HandleID="k8s-pod-network.845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:23.011 [INFO][3221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:33:23.018827 containerd[1459]: 2024-12-13 02:33:23.013 [INFO][3214] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:23.021108 containerd[1459]: time="2024-12-13T02:33:23.020328772Z" level=info msg="TearDown network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\" successfully" Dec 13 02:33:23.021545 containerd[1459]: time="2024-12-13T02:33:23.021283397Z" level=info msg="StopPodSandbox for \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\" returns successfully" Dec 13 02:33:23.025268 containerd[1459]: time="2024-12-13T02:33:23.024840974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xfkc9,Uid:4369aa74-ff91-4584-b36b-6b69892ac078,Namespace:default,Attempt:1,}" Dec 13 02:33:23.029947 systemd[1]: run-netns-cni\x2d68f1d70c\x2d087e\x2d332c\x2d4162\x2dc12ab2b2d126.mount: Deactivated successfully. Dec 13 02:33:23.457083 kubelet[1859]: E1213 02:33:23.456994 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:23.746811 systemd[1]: run-containerd-runc-k8s.io-cc226e7982261b105ead3ac0a5f895e2f66c81efeabce251ed22bb5c67b8f3b3-runc.aLdZb3.mount: Deactivated successfully. 
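The StopPodSandbox / TearDown / RunPodSandbox messages in the records above are CRI calls the kubelet makes against containerd's runtime service. Below is a minimal sketch of talking to that same gRPC surface directly, assuming containerd's default socket path; it only lists the existing sandboxes rather than creating or stopping any.

// cri_sketch.go - minimal sketch against the CRI runtime service behind the
// RunPodSandbox/StopPodSandbox records above; socket path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Println(sb.Id, sb.Metadata.Namespace+"/"+sb.Metadata.Name, sb.State)
	}
}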
Dec 13 02:33:23.786377 systemd-networkd[1376]: cali328b59f9b07: Link UP Dec 13 02:33:23.786955 systemd-networkd[1376]: cali328b59f9b07: Gained carrier Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.619 [INFO][3227] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0 nginx-deployment-85f456d6dd- default 4369aa74-ff91-4584-b36b-6b69892ac078 1326 0 2024-12-13 02:33:06 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.31 nginx-deployment-85f456d6dd-xfkc9 eth0 default [] [] [kns.default ksa.default.default] cali328b59f9b07 [] []}} ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Namespace="default" Pod="nginx-deployment-85f456d6dd-xfkc9" WorkloadEndpoint="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.619 [INFO][3227] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Namespace="default" Pod="nginx-deployment-85f456d6dd-xfkc9" WorkloadEndpoint="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.690 [INFO][3238] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" HandleID="k8s-pod-network.f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.709 [INFO][3238] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" HandleID="k8s-pod-network.f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051d60), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.31", "pod":"nginx-deployment-85f456d6dd-xfkc9", "timestamp":"2024-12-13 02:33:23.690750086 +0000 UTC"}, Hostname:"172.24.4.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.709 [INFO][3238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.709 [INFO][3238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.709 [INFO][3238] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.31' Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.713 [INFO][3238] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" host="172.24.4.31" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.730 [INFO][3238] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.31" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.740 [INFO][3238] ipam/ipam.go 489: Trying affinity for 192.168.44.64/26 host="172.24.4.31" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.746 [INFO][3238] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.64/26 host="172.24.4.31" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.750 [INFO][3238] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="172.24.4.31" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.750 [INFO][3238] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" host="172.24.4.31" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.756 [INFO][3238] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.767 [INFO][3238] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" host="172.24.4.31" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.779 [INFO][3238] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.66/26] block=192.168.44.64/26 handle="k8s-pod-network.f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" host="172.24.4.31" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.779 [INFO][3238] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.66/26] handle="k8s-pod-network.f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" host="172.24.4.31" Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.779 [INFO][3238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
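The IPAM records above confirm a block affinity for 192.168.44.64/26 on host 172.24.4.31 and claim 192.168.44.66 from that block. A /26 block covers 64 addresses, 192.168.44.64 through 192.168.44.127, which is why the csi-node-driver address (.65) and this nginx address (.66) come from the same block. A standard-library sketch of that containment check:

// block_sketch.go - the /26 arithmetic behind the affinity records above:
// 192.168.44.64/26 spans 64 addresses, .64 through .127.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.44.64/26")
	for _, s := range []string{"192.168.44.65", "192.168.44.66", "192.168.44.130"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
	// Number of addresses in the block: 2^(32-26) = 64.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}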
Dec 13 02:33:23.811692 containerd[1459]: 2024-12-13 02:33:23.779 [INFO][3238] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.66/26] IPv6=[] ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" HandleID="k8s-pod-network.f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.813833 containerd[1459]: 2024-12-13 02:33:23.781 [INFO][3227] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Namespace="default" Pod="nginx-deployment-85f456d6dd-xfkc9" WorkloadEndpoint="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"4369aa74-ff91-4584-b36b-6b69892ac078", ResourceVersion:"1326", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-xfkc9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali328b59f9b07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:23.813833 containerd[1459]: 2024-12-13 02:33:23.782 [INFO][3227] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.66/32] ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Namespace="default" Pod="nginx-deployment-85f456d6dd-xfkc9" WorkloadEndpoint="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.813833 containerd[1459]: 2024-12-13 02:33:23.782 [INFO][3227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali328b59f9b07 ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Namespace="default" Pod="nginx-deployment-85f456d6dd-xfkc9" WorkloadEndpoint="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.813833 containerd[1459]: 2024-12-13 02:33:23.786 [INFO][3227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Namespace="default" Pod="nginx-deployment-85f456d6dd-xfkc9" WorkloadEndpoint="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.813833 containerd[1459]: 2024-12-13 02:33:23.788 [INFO][3227] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Namespace="default" Pod="nginx-deployment-85f456d6dd-xfkc9" WorkloadEndpoint="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"4369aa74-ff91-4584-b36b-6b69892ac078", ResourceVersion:"1326", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d", Pod:"nginx-deployment-85f456d6dd-xfkc9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali328b59f9b07", MAC:"0a:a7:c8:d1:2d:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:23.813833 containerd[1459]: 2024-12-13 02:33:23.806 [INFO][3227] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d" Namespace="default" Pod="nginx-deployment-85f456d6dd-xfkc9" WorkloadEndpoint="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:23.846522 containerd[1459]: time="2024-12-13T02:33:23.846403757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:33:23.846726 containerd[1459]: time="2024-12-13T02:33:23.846550894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:33:23.846726 containerd[1459]: time="2024-12-13T02:33:23.846593975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:23.846844 containerd[1459]: time="2024-12-13T02:33:23.846797658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:23.865807 systemd[1]: Started cri-containerd-f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d.scope - libcontainer container f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d. 
Dec 13 02:33:23.918880 containerd[1459]: time="2024-12-13T02:33:23.918728271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xfkc9,Uid:4369aa74-ff91-4584-b36b-6b69892ac078,Namespace:default,Attempt:1,} returns sandbox id \"f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d\"" Dec 13 02:33:24.457856 kubelet[1859]: E1213 02:33:24.457732 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:24.594800 containerd[1459]: time="2024-12-13T02:33:24.594020047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:24.606482 containerd[1459]: time="2024-12-13T02:33:24.605871791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 02:33:24.638215 containerd[1459]: time="2024-12-13T02:33:24.638127942Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:24.646879 containerd[1459]: time="2024-12-13T02:33:24.646804719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:24.649565 containerd[1459]: time="2024-12-13T02:33:24.649431496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.594796785s" Dec 13 02:33:24.649565 containerd[1459]: time="2024-12-13T02:33:24.649530262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 02:33:24.653328 containerd[1459]: time="2024-12-13T02:33:24.652746867Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:33:24.655397 containerd[1459]: time="2024-12-13T02:33:24.655269588Z" level=info msg="CreateContainer within sandbox \"d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 02:33:24.844570 containerd[1459]: time="2024-12-13T02:33:24.844306739Z" level=info msg="CreateContainer within sandbox \"d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e46038a2af4399de66a5197499b5a4a423114b4a755575e5a979d7e16ffa943\"" Dec 13 02:33:24.849087 containerd[1459]: time="2024-12-13T02:33:24.848743738Z" level=info msg="StartContainer for \"8e46038a2af4399de66a5197499b5a4a423114b4a755575e5a979d7e16ffa943\"" Dec 13 02:33:24.913130 systemd[1]: Started cri-containerd-8e46038a2af4399de66a5197499b5a4a423114b4a755575e5a979d7e16ffa943.scope - libcontainer container 8e46038a2af4399de66a5197499b5a4a423114b4a755575e5a979d7e16ffa943. 
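The systemd-networkd records above and just below show the host-side Calico veths (cali98fc69a4789 for the csi-node-driver pod, cali328b59f9b07 for the nginx pod) going Link UP, gaining carrier, and later picking up IPv6 link-local addresses. A trivial sketch that checks the same operational state from sysfs, with the interface names taken from these records:

// operstate_sketch.go - reads the operational state of the host-side Calico
// veths named in the systemd-networkd records above; names come from the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, iface := range []string{"cali98fc69a4789", "cali328b59f9b07"} {
		data, err := os.ReadFile("/sys/class/net/" + iface + "/operstate")
		if err != nil {
			fmt.Println(iface, "not present:", err)
			continue
		}
		fmt.Println(iface, "operstate:", strings.TrimSpace(string(data)))
	}
}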
Dec 13 02:33:25.014671 containerd[1459]: time="2024-12-13T02:33:25.014572608Z" level=info msg="StartContainer for \"8e46038a2af4399de66a5197499b5a4a423114b4a755575e5a979d7e16ffa943\" returns successfully" Dec 13 02:33:25.122982 systemd-networkd[1376]: cali328b59f9b07: Gained IPv6LL Dec 13 02:33:25.458817 kubelet[1859]: E1213 02:33:25.458720 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:25.667015 kubelet[1859]: I1213 02:33:25.666617 1859 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 02:33:25.667015 kubelet[1859]: I1213 02:33:25.666740 1859 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 02:33:26.460818 kubelet[1859]: E1213 02:33:26.459448 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:27.461140 kubelet[1859]: E1213 02:33:27.461009 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:28.461696 kubelet[1859]: E1213 02:33:28.461594 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:29.311398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2192246638.mount: Deactivated successfully. Dec 13 02:33:29.461983 kubelet[1859]: E1213 02:33:29.461891 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:30.463179 kubelet[1859]: E1213 02:33:30.463040 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:31.463560 kubelet[1859]: E1213 02:33:31.463392 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:32.463596 kubelet[1859]: E1213 02:33:32.463558 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:32.804288 containerd[1459]: time="2024-12-13T02:33:32.803576115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:32.806228 containerd[1459]: time="2024-12-13T02:33:32.806109863Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 02:33:32.808440 containerd[1459]: time="2024-12-13T02:33:32.807331618Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:32.911246 containerd[1459]: time="2024-12-13T02:33:32.911141593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:32.914946 containerd[1459]: time="2024-12-13T02:33:32.914873381Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 8.262047816s" Dec 13 02:33:32.915079 containerd[1459]: time="2024-12-13T02:33:32.914951397Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:33:33.032104 containerd[1459]: time="2024-12-13T02:33:33.032011830Z" level=info msg="CreateContainer within sandbox \"f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 02:33:33.085773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2850427872.mount: Deactivated successfully. Dec 13 02:33:33.109606 containerd[1459]: time="2024-12-13T02:33:33.109533009Z" level=info msg="CreateContainer within sandbox \"f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d5eb7f6f3cb572b15567e5eb273bc96fc8408eb2a8891ec48b2abf8e45d418ae\"" Dec 13 02:33:33.110795 containerd[1459]: time="2024-12-13T02:33:33.110743893Z" level=info msg="StartContainer for \"d5eb7f6f3cb572b15567e5eb273bc96fc8408eb2a8891ec48b2abf8e45d418ae\"" Dec 13 02:33:33.167810 systemd[1]: Started cri-containerd-d5eb7f6f3cb572b15567e5eb273bc96fc8408eb2a8891ec48b2abf8e45d418ae.scope - libcontainer container d5eb7f6f3cb572b15567e5eb273bc96fc8408eb2a8891ec48b2abf8e45d418ae. Dec 13 02:33:33.202742 containerd[1459]: time="2024-12-13T02:33:33.202636396Z" level=info msg="StartContainer for \"d5eb7f6f3cb572b15567e5eb273bc96fc8408eb2a8891ec48b2abf8e45d418ae\" returns successfully" Dec 13 02:33:33.464583 kubelet[1859]: E1213 02:33:33.464503 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:34.063883 kubelet[1859]: I1213 02:33:34.063735 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cbm6l" podStartSLOduration=52.290391521 podStartE2EDuration="58.063696111s" podCreationTimestamp="2024-12-13 02:32:36 +0000 UTC" firstStartedPulling="2024-12-13 02:33:18.87769365 +0000 UTC m=+44.487087091" lastFinishedPulling="2024-12-13 02:33:24.6509982 +0000 UTC m=+50.260391681" observedRunningTime="2024-12-13 02:33:25.988445593 +0000 UTC m=+51.597839124" watchObservedRunningTime="2024-12-13 02:33:34.063696111 +0000 UTC m=+59.673089662" Dec 13 02:33:34.464909 kubelet[1859]: E1213 02:33:34.464809 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:35.465132 kubelet[1859]: E1213 02:33:35.465012 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:36.417777 kubelet[1859]: E1213 02:33:36.417577 1859 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:36.466229 kubelet[1859]: E1213 02:33:36.466100 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:36.552034 containerd[1459]: time="2024-12-13T02:33:36.551199213Z" level=info msg="StopPodSandbox for \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\"" Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.631 [WARNING][3472] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-csi--node--driver--cbm6l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8343490-cada-48ea-8455-e41a86be0a3c", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 32, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283", Pod:"csi-node-driver-cbm6l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98fc69a4789", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.632 [INFO][3472] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.632 [INFO][3472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" iface="eth0" netns="" Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.632 [INFO][3472] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.632 [INFO][3472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.683 [INFO][3478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" HandleID="k8s-pod-network.3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.683 [INFO][3478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.683 [INFO][3478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.699 [WARNING][3478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" HandleID="k8s-pod-network.3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.699 [INFO][3478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" HandleID="k8s-pod-network.3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.703 [INFO][3478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:33:36.710047 containerd[1459]: 2024-12-13 02:33:36.706 [INFO][3472] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:36.710047 containerd[1459]: time="2024-12-13T02:33:36.709154070Z" level=info msg="TearDown network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\" successfully" Dec 13 02:33:36.710047 containerd[1459]: time="2024-12-13T02:33:36.709181602Z" level=info msg="StopPodSandbox for \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\" returns successfully" Dec 13 02:33:36.715633 containerd[1459]: time="2024-12-13T02:33:36.715544687Z" level=info msg="RemovePodSandbox for \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\"" Dec 13 02:33:36.715633 containerd[1459]: time="2024-12-13T02:33:36.715607815Z" level=info msg="Forcibly stopping sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\"" Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.775 [WARNING][3499] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-csi--node--driver--cbm6l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8343490-cada-48ea-8455-e41a86be0a3c", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 32, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"d8c6affb3c0ea6f1e4f2c20cedbc76891b8a9b5ce77f14fa4fd8308746c69283", Pod:"csi-node-driver-cbm6l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98fc69a4789", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.775 [INFO][3499] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.775 [INFO][3499] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" iface="eth0" netns="" Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.775 [INFO][3499] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.775 [INFO][3499] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.818 [INFO][3505] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" HandleID="k8s-pod-network.3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.819 [INFO][3505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.819 [INFO][3505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.832 [WARNING][3505] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" HandleID="k8s-pod-network.3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.832 [INFO][3505] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" HandleID="k8s-pod-network.3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Workload="172.24.4.31-k8s-csi--node--driver--cbm6l-eth0" Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.834 [INFO][3505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:33:36.838269 containerd[1459]: 2024-12-13 02:33:36.836 [INFO][3499] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed" Dec 13 02:33:36.840091 containerd[1459]: time="2024-12-13T02:33:36.838742504Z" level=info msg="TearDown network for sandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\" successfully" Dec 13 02:33:36.930548 containerd[1459]: time="2024-12-13T02:33:36.930447440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:33:36.931374 containerd[1459]: time="2024-12-13T02:33:36.930569980Z" level=info msg="RemovePodSandbox \"3c5e552e0fc7009bd135653690f5b8e60aa7df1281daf8c84bb3be88f3bff2ed\" returns successfully" Dec 13 02:33:36.957610 containerd[1459]: time="2024-12-13T02:33:36.955933876Z" level=info msg="StopPodSandbox for \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\"" Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.081 [WARNING][3531] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"4369aa74-ff91-4584-b36b-6b69892ac078", ResourceVersion:"1364", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d", Pod:"nginx-deployment-85f456d6dd-xfkc9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali328b59f9b07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.081 [INFO][3531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.081 [INFO][3531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" iface="eth0" netns="" Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.081 [INFO][3531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.081 [INFO][3531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.126 [INFO][3537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" HandleID="k8s-pod-network.845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.127 [INFO][3537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.127 [INFO][3537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.146 [WARNING][3537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" HandleID="k8s-pod-network.845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.146 [INFO][3537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" HandleID="k8s-pod-network.845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.151 [INFO][3537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:33:37.154556 containerd[1459]: 2024-12-13 02:33:37.153 [INFO][3531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:37.155506 containerd[1459]: time="2024-12-13T02:33:37.154599320Z" level=info msg="TearDown network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\" successfully" Dec 13 02:33:37.155506 containerd[1459]: time="2024-12-13T02:33:37.154624878Z" level=info msg="StopPodSandbox for \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\" returns successfully" Dec 13 02:33:37.156092 containerd[1459]: time="2024-12-13T02:33:37.155756582Z" level=info msg="RemovePodSandbox for \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\"" Dec 13 02:33:37.156092 containerd[1459]: time="2024-12-13T02:33:37.155800445Z" level=info msg="Forcibly stopping sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\"" Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.231 [WARNING][3556] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"4369aa74-ff91-4584-b36b-6b69892ac078", ResourceVersion:"1364", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"f4fda711bf9a9afdd302c7ccb03b62bc22c6c2a34dcf2beb31756c0cb327e36d", Pod:"nginx-deployment-85f456d6dd-xfkc9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali328b59f9b07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.232 [INFO][3556] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.232 [INFO][3556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" iface="eth0" netns="" Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.232 [INFO][3556] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.232 [INFO][3556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.283 [INFO][3562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" HandleID="k8s-pod-network.845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.283 [INFO][3562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.283 [INFO][3562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.296 [WARNING][3562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" HandleID="k8s-pod-network.845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.296 [INFO][3562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" HandleID="k8s-pod-network.845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Workload="172.24.4.31-k8s-nginx--deployment--85f456d6dd--xfkc9-eth0" Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.299 [INFO][3562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:33:37.304762 containerd[1459]: 2024-12-13 02:33:37.302 [INFO][3556] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24" Dec 13 02:33:37.306479 containerd[1459]: time="2024-12-13T02:33:37.304830971Z" level=info msg="TearDown network for sandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\" successfully" Dec 13 02:33:37.318379 containerd[1459]: time="2024-12-13T02:33:37.318262594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:33:37.318502 containerd[1459]: time="2024-12-13T02:33:37.318379142Z" level=info msg="RemovePodSandbox \"845047c490d5c2c66104cdbcbeab19d22bf73019620e6b380521d94cc352bd24\" returns successfully" Dec 13 02:33:37.319765 containerd[1459]: time="2024-12-13T02:33:37.319426178Z" level=info msg="StopPodSandbox for \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\"" Dec 13 02:33:37.319765 containerd[1459]: time="2024-12-13T02:33:37.319593522Z" level=info msg="TearDown network for sandbox \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\" successfully" Dec 13 02:33:37.319765 containerd[1459]: time="2024-12-13T02:33:37.319621304Z" level=info msg="StopPodSandbox for \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\" returns successfully" Dec 13 02:33:37.321039 containerd[1459]: time="2024-12-13T02:33:37.320492520Z" level=info msg="RemovePodSandbox for \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\"" Dec 13 02:33:37.321039 containerd[1459]: time="2024-12-13T02:33:37.320550098Z" level=info msg="Forcibly stopping sandbox \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\"" Dec 13 02:33:37.321039 containerd[1459]: time="2024-12-13T02:33:37.320693186Z" level=info msg="TearDown network for sandbox \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\" successfully" Dec 13 02:33:37.334708 containerd[1459]: time="2024-12-13T02:33:37.334398183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 02:33:37.334708 containerd[1459]: time="2024-12-13T02:33:37.334531734Z" level=info msg="RemovePodSandbox \"46e0561f7140e73492e168f9d3a7f3ab512b896d642443675be0c72efe96f11c\" returns successfully" Dec 13 02:33:37.467326 kubelet[1859]: E1213 02:33:37.467104 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:38.467846 kubelet[1859]: E1213 02:33:38.467755 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:39.468349 kubelet[1859]: E1213 02:33:39.468266 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:40.468546 kubelet[1859]: E1213 02:33:40.468460 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:41.469218 kubelet[1859]: E1213 02:33:41.469142 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:42.040115 kubelet[1859]: I1213 02:33:42.039718 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-xfkc9" podStartSLOduration=26.985712138 podStartE2EDuration="36.039583043s" podCreationTimestamp="2024-12-13 02:33:06 +0000 UTC" firstStartedPulling="2024-12-13 02:33:23.920076816 +0000 UTC m=+49.529470257" lastFinishedPulling="2024-12-13 02:33:32.973947681 +0000 UTC m=+58.583341162" observedRunningTime="2024-12-13 02:33:34.067155085 +0000 UTC m=+59.676548596" watchObservedRunningTime="2024-12-13 02:33:42.039583043 +0000 UTC m=+67.648976524" Dec 13 02:33:42.041221 kubelet[1859]: I1213 02:33:42.041124 1859 topology_manager.go:215] "Topology Admit Handler" podUID="24ff737f-1743-48a7-bad4-f81df9d26448" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 02:33:42.058036 systemd[1]: Created slice kubepods-besteffort-pod24ff737f_1743_48a7_bad4_f81df9d26448.slice - libcontainer container kubepods-besteffort-pod24ff737f_1743_48a7_bad4_f81df9d26448.slice. 
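The pod_startup_latency_tracker record above for nginx-deployment-85f456d6dd-xfkc9 reports podStartE2EDuration=36.039583043s and podStartSLOduration=26.985712138s. The sketch below reproduces that arithmetic from the logged timestamps, under the assumption that the SLO duration is the end-to-end duration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling); the numbers line up with the record.

// startup_latency_sketch.go - reproduces the arithmetic in the "Observed pod
// startup duration" record above. Assumption: SLO duration = end-to-end
// duration minus image-pulling time.
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2024-12-13 02:33:06 +0000 UTC")
	running := parse("2024-12-13 02:33:42.039583043 +0000 UTC")
	pullStart := parse("2024-12-13 02:33:23.920076816 +0000 UTC")
	pullEnd := parse("2024-12-13 02:33:32.973947681 +0000 UTC")

	e2e := running.Sub(created)
	pulling := pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e)         // ~36.039583043s
	fmt.Println("image pulling:", pulling)           // ~9.053870865s
	fmt.Println("podStartSLOduration:", e2e-pulling) // ~26.9857s
}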
Dec 13 02:33:42.165258 kubelet[1859]: I1213 02:33:42.165049 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkxhc\" (UniqueName: \"kubernetes.io/projected/24ff737f-1743-48a7-bad4-f81df9d26448-kube-api-access-bkxhc\") pod \"nfs-server-provisioner-0\" (UID: \"24ff737f-1743-48a7-bad4-f81df9d26448\") " pod="default/nfs-server-provisioner-0" Dec 13 02:33:42.165258 kubelet[1859]: I1213 02:33:42.165141 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/24ff737f-1743-48a7-bad4-f81df9d26448-data\") pod \"nfs-server-provisioner-0\" (UID: \"24ff737f-1743-48a7-bad4-f81df9d26448\") " pod="default/nfs-server-provisioner-0" Dec 13 02:33:42.365266 containerd[1459]: time="2024-12-13T02:33:42.364998153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:24ff737f-1743-48a7-bad4-f81df9d26448,Namespace:default,Attempt:0,}" Dec 13 02:33:42.469314 kubelet[1859]: E1213 02:33:42.469256 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:42.605762 systemd-networkd[1376]: cali60e51b789ff: Link UP Dec 13 02:33:42.606337 systemd-networkd[1376]: cali60e51b789ff: Gained carrier Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.475 [INFO][3574] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.31-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 24ff737f-1743-48a7-bad4-f81df9d26448 1401 0 2024-12-13 02:33:41 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.31 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.31-k8s-nfs--server--provisioner--0-" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.475 [INFO][3574] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.533 [INFO][3585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" HandleID="k8s-pod-network.c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Workload="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.547 [INFO][3585] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" HandleID="k8s-pod-network.c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Workload="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c80), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.31", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 02:33:42.533186544 +0000 UTC"}, Hostname:"172.24.4.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.547 [INFO][3585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.547 [INFO][3585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.547 [INFO][3585] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.31' Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.550 [INFO][3585] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" host="172.24.4.31" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.557 [INFO][3585] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.31" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.565 [INFO][3585] ipam/ipam.go 489: Trying affinity for 192.168.44.64/26 host="172.24.4.31" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.568 [INFO][3585] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.64/26 host="172.24.4.31" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.572 [INFO][3585] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="172.24.4.31" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.572 [INFO][3585] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" host="172.24.4.31" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.575 [INFO][3585] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0 Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.582 [INFO][3585] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" host="172.24.4.31" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.599 [INFO][3585] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.67/26] block=192.168.44.64/26 handle="k8s-pod-network.c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" host="172.24.4.31" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.599 [INFO][3585] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.67/26] handle="k8s-pod-network.c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" host="172.24.4.31" Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.599 [INFO][3585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:33:42.634782 containerd[1459]: 2024-12-13 02:33:42.599 [INFO][3585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.67/26] IPv6=[] ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" HandleID="k8s-pod-network.c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Workload="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" Dec 13 02:33:42.640810 containerd[1459]: 2024-12-13 02:33:42.601 [INFO][3574] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"24ff737f-1743-48a7-bad4-f81df9d26448", ResourceVersion:"1401", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 33, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:42.640810 containerd[1459]: 2024-12-13 02:33:42.601 [INFO][3574] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.67/32] ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" Dec 13 02:33:42.640810 containerd[1459]: 2024-12-13 02:33:42.602 [INFO][3574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" Dec 13 02:33:42.640810 containerd[1459]: 2024-12-13 02:33:42.607 [INFO][3574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" Dec 13 02:33:42.644500 containerd[1459]: 2024-12-13 02:33:42.607 [INFO][3574] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"24ff737f-1743-48a7-bad4-f81df9d26448", ResourceVersion:"1401", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 33, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"12:7a:66:23:8d:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:33:42.644500 containerd[1459]: 2024-12-13 02:33:42.628 [INFO][3574] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.31-k8s-nfs--server--provisioner--0-eth0" Dec 13 02:33:42.667749 containerd[1459]: time="2024-12-13T02:33:42.667351493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:33:42.667749 containerd[1459]: time="2024-12-13T02:33:42.667426815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:33:42.667749 containerd[1459]: time="2024-12-13T02:33:42.667456089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:42.667749 containerd[1459]: time="2024-12-13T02:33:42.667572538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:42.696912 systemd[1]: Started cri-containerd-c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0.scope - libcontainer container c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0. 
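The WorkloadEndpointPort entries in the endpoint dump above print each Port as a Go hex literal. Decoding them recovers the plain NFS service ports named at the start of the CNI log (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662); a short sketch:

```python
# Hex Port values copied from the WorkloadEndpointPort dump above; each
# TCP port has a matching -udp entry with the same number.
hex_ports = {
    "nfs": 0x801,
    "nlockmgr": 0x8023,
    "mountd": 0x4e50,
    "rquotad": 0x36b,
    "rpcbind": 0x6f,
    "statd": 0x296,
}
for name, port in hex_ports.items():
    print(f"{name:<10} {port}")   # 2049, 32803, 20048, 875, 111, 662
```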
Dec 13 02:33:42.735956 containerd[1459]: time="2024-12-13T02:33:42.735709411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:24ff737f-1743-48a7-bad4-f81df9d26448,Namespace:default,Attempt:0,} returns sandbox id \"c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0\"" Dec 13 02:33:42.738207 containerd[1459]: time="2024-12-13T02:33:42.738181190Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 02:33:43.470322 kubelet[1859]: E1213 02:33:43.470216 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:44.470884 kubelet[1859]: E1213 02:33:44.470795 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:44.515243 systemd-networkd[1376]: cali60e51b789ff: Gained IPv6LL Dec 13 02:33:45.471208 kubelet[1859]: E1213 02:33:45.471153 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:46.472182 kubelet[1859]: E1213 02:33:46.472114 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:46.486408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329950289.mount: Deactivated successfully. Dec 13 02:33:47.472786 kubelet[1859]: E1213 02:33:47.472725 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:48.473573 kubelet[1859]: E1213 02:33:48.473352 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:49.473779 kubelet[1859]: E1213 02:33:49.473701 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:49.622677 containerd[1459]: time="2024-12-13T02:33:49.622526704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:49.626905 containerd[1459]: time="2024-12-13T02:33:49.626115847Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Dec 13 02:33:49.646719 containerd[1459]: time="2024-12-13T02:33:49.644913167Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:49.653455 containerd[1459]: time="2024-12-13T02:33:49.653378780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:49.657022 containerd[1459]: time="2024-12-13T02:33:49.656907241Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.918676468s" Dec 13 02:33:49.657162 containerd[1459]: time="2024-12-13T02:33:49.657021645Z" level=info msg="PullImage 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 02:33:49.663538 containerd[1459]: time="2024-12-13T02:33:49.663478059Z" level=info msg="CreateContainer within sandbox \"c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 02:33:49.717005 containerd[1459]: time="2024-12-13T02:33:49.716919731Z" level=info msg="CreateContainer within sandbox \"c631b31ea16ca07971ab85d963f541dc4b1ed5bcd206559019da52399f27c9a0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"31d79e8caaa18f44300b3b4a7131ef05b37ef3b444fe01dbdc4ffe42a55b9b2e\"" Dec 13 02:33:49.718886 containerd[1459]: time="2024-12-13T02:33:49.718831377Z" level=info msg="StartContainer for \"31d79e8caaa18f44300b3b4a7131ef05b37ef3b444fe01dbdc4ffe42a55b9b2e\"" Dec 13 02:33:49.790909 systemd[1]: Started cri-containerd-31d79e8caaa18f44300b3b4a7131ef05b37ef3b444fe01dbdc4ffe42a55b9b2e.scope - libcontainer container 31d79e8caaa18f44300b3b4a7131ef05b37ef3b444fe01dbdc4ffe42a55b9b2e. Dec 13 02:33:49.843074 containerd[1459]: time="2024-12-13T02:33:49.842995733Z" level=info msg="StartContainer for \"31d79e8caaa18f44300b3b4a7131ef05b37ef3b444fe01dbdc4ffe42a55b9b2e\" returns successfully" Dec 13 02:33:50.199718 kubelet[1859]: I1213 02:33:50.199570 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.277193032 podStartE2EDuration="9.199537613s" podCreationTimestamp="2024-12-13 02:33:41 +0000 UTC" firstStartedPulling="2024-12-13 02:33:42.737555546 +0000 UTC m=+68.346948977" lastFinishedPulling="2024-12-13 02:33:49.659900077 +0000 UTC m=+75.269293558" observedRunningTime="2024-12-13 02:33:50.197220055 +0000 UTC m=+75.806613567" watchObservedRunningTime="2024-12-13 02:33:50.199537613 +0000 UTC m=+75.808931124" Dec 13 02:33:50.474922 kubelet[1859]: E1213 02:33:50.474694 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:51.475843 kubelet[1859]: E1213 02:33:51.475751 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:52.476791 kubelet[1859]: E1213 02:33:52.476630 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:53.477854 kubelet[1859]: E1213 02:33:53.477764 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:54.478145 kubelet[1859]: E1213 02:33:54.478052 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:55.478835 kubelet[1859]: E1213 02:33:55.478736 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:56.417204 kubelet[1859]: E1213 02:33:56.417102 1859 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:56.479112 kubelet[1859]: E1213 02:33:56.479012 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:57.479697 kubelet[1859]: E1213 02:33:57.479550 1859 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:58.480671 kubelet[1859]: E1213 02:33:58.480538 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:59.481844 kubelet[1859]: E1213 02:33:59.481758 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:00.482939 kubelet[1859]: E1213 02:34:00.482855 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:01.483728 kubelet[1859]: E1213 02:34:01.483588 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:02.484921 kubelet[1859]: E1213 02:34:02.484817 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:03.485207 kubelet[1859]: E1213 02:34:03.485083 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:04.486517 kubelet[1859]: E1213 02:34:04.486415 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:05.487405 kubelet[1859]: E1213 02:34:05.487269 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:06.488422 kubelet[1859]: E1213 02:34:06.488292 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:07.488697 kubelet[1859]: E1213 02:34:07.488542 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:08.489142 kubelet[1859]: E1213 02:34:08.488962 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:09.489884 kubelet[1859]: E1213 02:34:09.489781 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:10.490163 kubelet[1859]: E1213 02:34:10.490028 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:11.490953 kubelet[1859]: E1213 02:34:11.490834 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:12.491466 kubelet[1859]: E1213 02:34:12.491316 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:13.492636 kubelet[1859]: E1213 02:34:13.492500 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:14.493913 kubelet[1859]: E1213 02:34:14.493804 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:14.750136 kubelet[1859]: I1213 02:34:14.749815 1859 topology_manager.go:215] "Topology Admit Handler" podUID="460a8903-b05a-4113-bd07-152c42df4478" podNamespace="default" podName="test-pod-1" Dec 13 02:34:14.776754 systemd[1]: Created slice kubepods-besteffort-pod460a8903_b05a_4113_bd07_152c42df4478.slice - libcontainer container 
kubepods-besteffort-pod460a8903_b05a_4113_bd07_152c42df4478.slice. Dec 13 02:34:14.914139 kubelet[1859]: I1213 02:34:14.914033 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls6fs\" (UniqueName: \"kubernetes.io/projected/460a8903-b05a-4113-bd07-152c42df4478-kube-api-access-ls6fs\") pod \"test-pod-1\" (UID: \"460a8903-b05a-4113-bd07-152c42df4478\") " pod="default/test-pod-1" Dec 13 02:34:14.914139 kubelet[1859]: I1213 02:34:14.914145 1859 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99e0747e-92dc-433c-b423-cd740c1e825c\" (UniqueName: \"kubernetes.io/nfs/460a8903-b05a-4113-bd07-152c42df4478-pvc-99e0747e-92dc-433c-b423-cd740c1e825c\") pod \"test-pod-1\" (UID: \"460a8903-b05a-4113-bd07-152c42df4478\") " pod="default/test-pod-1" Dec 13 02:34:15.111799 kernel: FS-Cache: Loaded Dec 13 02:34:15.216945 kernel: RPC: Registered named UNIX socket transport module. Dec 13 02:34:15.217282 kernel: RPC: Registered udp transport module. Dec 13 02:34:15.217341 kernel: RPC: Registered tcp transport module. Dec 13 02:34:15.217384 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 02:34:15.217834 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 02:34:15.495105 kubelet[1859]: E1213 02:34:15.494208 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:15.557022 kernel: NFS: Registering the id_resolver key type Dec 13 02:34:15.557285 kernel: Key type id_resolver registered Dec 13 02:34:15.557350 kernel: Key type id_legacy registered Dec 13 02:34:15.607425 nfsidmap[3809]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 02:34:15.619067 nfsidmap[3810]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 02:34:15.685853 containerd[1459]: time="2024-12-13T02:34:15.685703920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:460a8903-b05a-4113-bd07-152c42df4478,Namespace:default,Attempt:0,}" Dec 13 02:34:15.968617 systemd-networkd[1376]: cali5ec59c6bf6e: Link UP Dec 13 02:34:15.972842 systemd-networkd[1376]: cali5ec59c6bf6e: Gained carrier Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.805 [INFO][3811] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.31-k8s-test--pod--1-eth0 default 460a8903-b05a-4113-bd07-152c42df4478 1508 0 2024-12-13 02:33:46 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.31 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.31-k8s-test--pod--1-" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.805 [INFO][3811] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.31-k8s-test--pod--1-eth0" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.873 [INFO][3822] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" HandleID="k8s-pod-network.c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Workload="172.24.4.31-k8s-test--pod--1-eth0" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.894 [INFO][3822] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" HandleID="k8s-pod-network.c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Workload="172.24.4.31-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002937c0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.31", "pod":"test-pod-1", "timestamp":"2024-12-13 02:34:15.8738737 +0000 UTC"}, Hostname:"172.24.4.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.894 [INFO][3822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.894 [INFO][3822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.894 [INFO][3822] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.31' Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.898 [INFO][3822] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" host="172.24.4.31" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.908 [INFO][3822] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.31" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.917 [INFO][3822] ipam/ipam.go 489: Trying affinity for 192.168.44.64/26 host="172.24.4.31" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.921 [INFO][3822] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.64/26 host="172.24.4.31" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.927 [INFO][3822] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="172.24.4.31" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.928 [INFO][3822] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" host="172.24.4.31" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.931 [INFO][3822] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236 Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.939 [INFO][3822] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" host="172.24.4.31" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.954 [INFO][3822] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.68/26] block=192.168.44.64/26 handle="k8s-pod-network.c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" host="172.24.4.31" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.954 [INFO][3822] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.68/26] 
handle="k8s-pod-network.c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" host="172.24.4.31" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.954 [INFO][3822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.955 [INFO][3822] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.68/26] IPv6=[] ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" HandleID="k8s-pod-network.c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Workload="172.24.4.31-k8s-test--pod--1-eth0" Dec 13 02:34:16.014464 containerd[1459]: 2024-12-13 02:34:15.958 [INFO][3811] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.31-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"460a8903-b05a-4113-bd07-152c42df4478", ResourceVersion:"1508", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:34:16.017134 containerd[1459]: 2024-12-13 02:34:15.959 [INFO][3811] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.68/32] ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.31-k8s-test--pod--1-eth0" Dec 13 02:34:16.017134 containerd[1459]: 2024-12-13 02:34:15.959 [INFO][3811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.31-k8s-test--pod--1-eth0" Dec 13 02:34:16.017134 containerd[1459]: 2024-12-13 02:34:15.975 [INFO][3811] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.31-k8s-test--pod--1-eth0" Dec 13 02:34:16.017134 containerd[1459]: 2024-12-13 02:34:15.983 [INFO][3811] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.31-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"172.24.4.31-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"460a8903-b05a-4113-bd07-152c42df4478", ResourceVersion:"1508", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.31", ContainerID:"c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"c6:3f:f0:11:fb:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:34:16.017134 containerd[1459]: 2024-12-13 02:34:16.003 [INFO][3811] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.31-k8s-test--pod--1-eth0" Dec 13 02:34:16.050030 containerd[1459]: time="2024-12-13T02:34:16.049549889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:34:16.050030 containerd[1459]: time="2024-12-13T02:34:16.049705853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:34:16.050030 containerd[1459]: time="2024-12-13T02:34:16.049743945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:16.050382 containerd[1459]: time="2024-12-13T02:34:16.049993484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:16.076393 systemd[1]: run-containerd-runc-k8s.io-c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236-runc.VP4OAd.mount: Deactivated successfully. Dec 13 02:34:16.085855 systemd[1]: Started cri-containerd-c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236.scope - libcontainer container c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236. 
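test-pod-1 above mounts an NFS-backed claim (the pvc-99e0747e... volume) served by the freshly started nfs-server-provisioner, and its container runs the ghcr.io/flatcar/nginx:latest image the kubelet pulls next. The actual manifests are not in the log; the following is a hypothetical reconstruction using the `kubernetes` Python client's dict helper, in which the claim name, storage class, size, and mount path are all assumptions, while the pod name, namespace, and image come from the log:

```python
# Hypothetical PVC + pod pair matching the logged sequence. The storage class
# "nfs", claim name, size, and mount path are assumptions.
from kubernetes import client, config, utils

config.load_kube_config()
api = client.ApiClient()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "test-claim", "namespace": "default"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "nfs",
        "resources": {"requests": {"storage": "1Gi"}},
    },
}
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "test-pod-1", "namespace": "default"},
    "spec": {
        "containers": [{
            "name": "test",
            "image": "ghcr.io/flatcar/nginx:latest",
            "volumeMounts": [{"name": "data", "mountPath": "/data"}],
        }],
        "volumes": [{
            "name": "data",
            "persistentVolumeClaim": {"claimName": "test-claim"},
        }],
    },
}

for manifest in (pvc, pod):
    utils.create_from_dict(api, manifest)
```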
Dec 13 02:34:16.136839 containerd[1459]: time="2024-12-13T02:34:16.136751960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:460a8903-b05a-4113-bd07-152c42df4478,Namespace:default,Attempt:0,} returns sandbox id \"c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236\"" Dec 13 02:34:16.139204 containerd[1459]: time="2024-12-13T02:34:16.139096243Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:34:16.417756 kubelet[1859]: E1213 02:34:16.417628 1859 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:16.496183 kubelet[1859]: E1213 02:34:16.496110 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:16.545840 containerd[1459]: time="2024-12-13T02:34:16.545733222Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:16.547785 containerd[1459]: time="2024-12-13T02:34:16.547622889Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 02:34:16.555617 containerd[1459]: time="2024-12-13T02:34:16.555513472Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 416.366754ms" Dec 13 02:34:16.555617 containerd[1459]: time="2024-12-13T02:34:16.555598802Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:34:16.560999 containerd[1459]: time="2024-12-13T02:34:16.560872350Z" level=info msg="CreateContainer within sandbox \"c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 02:34:16.608130 containerd[1459]: time="2024-12-13T02:34:16.608032014Z" level=info msg="CreateContainer within sandbox \"c09c9b32f4a60a3c5f0201a4c46c4834ca1e7ab1a9c919689c88766394d66236\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6e5dea6e1b6669f072d37140ea6eac8730ea2c436ba03e631a43eb50d0d87ed4\"" Dec 13 02:34:16.610177 containerd[1459]: time="2024-12-13T02:34:16.609892747Z" level=info msg="StartContainer for \"6e5dea6e1b6669f072d37140ea6eac8730ea2c436ba03e631a43eb50d0d87ed4\"" Dec 13 02:34:16.679807 systemd[1]: Started cri-containerd-6e5dea6e1b6669f072d37140ea6eac8730ea2c436ba03e631a43eb50d0d87ed4.scope - libcontainer container 6e5dea6e1b6669f072d37140ea6eac8730ea2c436ba03e631a43eb50d0d87ed4. 
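containerd reports two very different pulls above: the nfs-provisioner image transferred about 91 MB in roughly 6.9 s, while nginx:latest was already in the local content store, so only 61 bytes were read and the pull returned in about 416 ms. A quick arithmetic sketch of the effective throughput, with the numbers copied from the log:

```python
# bytes read and pull duration copied from the containerd messages above
pulls = {
    "registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8": (91_039_414, 6.918676468),
    "ghcr.io/flatcar/nginx:latest": (61, 0.416366754),  # already cached locally
}
for image, (nbytes, seconds) in pulls.items():
    print(f"{image}: {nbytes / seconds / 1e6:.1f} MB/s over {seconds:.3f}s")
```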
Dec 13 02:34:16.718976 containerd[1459]: time="2024-12-13T02:34:16.718895816Z" level=info msg="StartContainer for \"6e5dea6e1b6669f072d37140ea6eac8730ea2c436ba03e631a43eb50d0d87ed4\" returns successfully" Dec 13 02:34:17.154993 systemd-networkd[1376]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 02:34:17.496984 kubelet[1859]: E1213 02:34:17.496747 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:18.497024 kubelet[1859]: E1213 02:34:18.496943 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:19.498281 kubelet[1859]: E1213 02:34:19.498159 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:20.499551 kubelet[1859]: E1213 02:34:20.499451 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:21.500702 kubelet[1859]: E1213 02:34:21.500544 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:22.501284 kubelet[1859]: E1213 02:34:22.501181 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:23.501516 kubelet[1859]: E1213 02:34:23.501425 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:34:24.502150 kubelet[1859]: E1213 02:34:24.502054 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
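The once-per-second "Unable to read config path" entries that fill the rest of the log are the kubelet's static-pod file source polling /etc/kubernetes/manifests, which does not exist on this node; the kubelet explicitly ignores the condition, so the messages are harmless noise. If static pods are actually wanted here, creating the directory should be enough to quiet the poller; a minimal sketch, to be run as root on the node:

```python
# Create the default static-pod manifest directory the kubelet is polling for.
# Only do this if static pods are intended on this node; otherwise the
# recurring message above can simply be ignored.
import os

os.makedirs("/etc/kubernetes/manifests", mode=0o755, exist_ok=True)
```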