Jan 30 16:01:25.099472 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 16:01:25.099498 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 16:01:25.099529 kernel: BIOS-provided physical RAM map: Jan 30 16:01:25.099537 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 16:01:25.099544 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 16:01:25.099554 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 16:01:25.099562 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 30 16:01:25.099570 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 30 16:01:25.099577 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 16:01:25.099584 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 16:01:25.099591 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 30 16:01:25.099599 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 16:01:25.099606 kernel: NX (Execute Disable) protection: active Jan 30 16:01:25.099613 kernel: APIC: Static calls initialized Jan 30 16:01:25.099624 kernel: SMBIOS 3.0.0 present. Jan 30 16:01:25.099632 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 30 16:01:25.099640 kernel: Hypervisor detected: KVM Jan 30 16:01:25.099647 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 16:01:25.099655 kernel: kvm-clock: using sched offset of 3439068257 cycles Jan 30 16:01:25.099664 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 16:01:25.099673 kernel: tsc: Detected 1996.249 MHz processor Jan 30 16:01:25.099681 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 16:01:25.099689 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 16:01:25.099697 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 30 16:01:25.099705 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 16:01:25.099713 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 16:01:25.099721 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 30 16:01:25.099728 kernel: ACPI: Early table checksum verification disabled Jan 30 16:01:25.099738 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 30 16:01:25.099746 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 16:01:25.099754 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 16:01:25.099761 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 16:01:25.099769 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 30 16:01:25.099777 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 16:01:25.099785 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 30 16:01:25.099793 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 30 16:01:25.099801 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 30 16:01:25.099810 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 30 16:01:25.099818 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 30 16:01:25.099826 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 30 16:01:25.099837 kernel: No NUMA configuration found Jan 30 16:01:25.099854 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 30 16:01:25.099863 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 30 16:01:25.099873 kernel: Zone ranges: Jan 30 16:01:25.099881 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 16:01:25.099889 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 16:01:25.099897 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 30 16:01:25.099905 kernel: Movable zone start for each node Jan 30 16:01:25.099913 kernel: Early memory node ranges Jan 30 16:01:25.099921 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 16:01:25.099929 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 30 16:01:25.099938 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 30 16:01:25.099948 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 30 16:01:25.099956 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 16:01:25.099964 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 16:01:25.099972 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 30 16:01:25.099980 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 16:01:25.099989 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 16:01:25.099997 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 16:01:25.100005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 16:01:25.100013 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 16:01:25.100023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 16:01:25.100031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 16:01:25.100039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 16:01:25.100048 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 16:01:25.100056 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 16:01:25.100064 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 16:01:25.100072 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 30 16:01:25.100080 kernel: Booting paravirtualized kernel on KVM Jan 30 16:01:25.100089 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 16:01:25.100099 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 16:01:25.100107 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 16:01:25.100116 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 16:01:25.100124 kernel: pcpu-alloc: [0] 0 1 Jan 30 16:01:25.100132 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 16:01:25.100141 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 16:01:25.100150 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 16:01:25.100160 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 16:01:25.100169 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 16:01:25.100177 kernel: Fallback order for Node 0: 0 Jan 30 16:01:25.100185 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 30 16:01:25.100193 kernel: Policy zone: Normal Jan 30 16:01:25.100201 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 16:01:25.100209 kernel: software IO TLB: area num 2. Jan 30 16:01:25.100218 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 30 16:01:25.100226 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 16:01:25.100236 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 16:01:25.100244 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 16:01:25.100252 kernel: Dynamic Preempt: voluntary Jan 30 16:01:25.100260 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 16:01:25.100269 kernel: rcu: RCU event tracing is enabled. Jan 30 16:01:25.100278 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 16:01:25.100286 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 16:01:25.100294 kernel: Rude variant of Tasks RCU enabled. Jan 30 16:01:25.100302 kernel: Tracing variant of Tasks RCU enabled. Jan 30 16:01:25.100310 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 16:01:25.100321 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 16:01:25.100329 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 16:01:25.100337 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 16:01:25.100345 kernel: Console: colour VGA+ 80x25 Jan 30 16:01:25.100353 kernel: printk: console [tty0] enabled Jan 30 16:01:25.100362 kernel: printk: console [ttyS0] enabled Jan 30 16:01:25.100370 kernel: ACPI: Core revision 20230628 Jan 30 16:01:25.100378 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 16:01:25.100386 kernel: x2apic enabled Jan 30 16:01:25.100397 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 16:01:25.100405 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 16:01:25.100413 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 16:01:25.100422 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 30 16:01:25.100430 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 16:01:25.100438 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 16:01:25.100446 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 16:01:25.100456 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 16:01:25.100465 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 16:01:25.100477 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 16:01:25.100485 kernel: Speculative Store Bypass: Vulnerable Jan 30 16:01:25.100494 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 30 16:01:25.100503 kernel: Freeing SMP alternatives memory: 32K Jan 30 16:01:25.102553 kernel: pid_max: default: 32768 minimum: 301 Jan 30 16:01:25.102565 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 16:01:25.102574 kernel: landlock: Up and running. Jan 30 16:01:25.102583 kernel: SELinux: Initializing. Jan 30 16:01:25.102592 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 16:01:25.102601 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 16:01:25.102610 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 30 16:01:25.102621 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 16:01:25.102630 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 16:01:25.102639 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 16:01:25.102647 kernel: Performance Events: AMD PMU driver. Jan 30 16:01:25.102656 kernel: ... version: 0 Jan 30 16:01:25.102667 kernel: ... bit width: 48 Jan 30 16:01:25.102675 kernel: ... generic registers: 4 Jan 30 16:01:25.102684 kernel: ... value mask: 0000ffffffffffff Jan 30 16:01:25.102693 kernel: ... max period: 00007fffffffffff Jan 30 16:01:25.102701 kernel: ... fixed-purpose events: 0 Jan 30 16:01:25.102710 kernel: ... event mask: 000000000000000f Jan 30 16:01:25.102719 kernel: signal: max sigframe size: 1440 Jan 30 16:01:25.102727 kernel: rcu: Hierarchical SRCU implementation. Jan 30 16:01:25.102736 kernel: rcu: Max phase no-delay instances is 400. Jan 30 16:01:25.102745 kernel: smp: Bringing up secondary CPUs ... Jan 30 16:01:25.102755 kernel: smpboot: x86: Booting SMP configuration: Jan 30 16:01:25.102764 kernel: .... 
node #0, CPUs: #1 Jan 30 16:01:25.102772 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 16:01:25.102781 kernel: smpboot: Max logical packages: 2 Jan 30 16:01:25.102790 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 30 16:01:25.102798 kernel: devtmpfs: initialized Jan 30 16:01:25.102807 kernel: x86/mm: Memory block size: 128MB Jan 30 16:01:25.102816 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 16:01:25.102825 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 16:01:25.102836 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 16:01:25.102845 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 16:01:25.102853 kernel: audit: initializing netlink subsys (disabled) Jan 30 16:01:25.102862 kernel: audit: type=2000 audit(1738252883.954:1): state=initialized audit_enabled=0 res=1 Jan 30 16:01:25.102870 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 16:01:25.102879 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 16:01:25.102888 kernel: cpuidle: using governor menu Jan 30 16:01:25.102896 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 16:01:25.102905 kernel: dca service started, version 1.12.1 Jan 30 16:01:25.102915 kernel: PCI: Using configuration type 1 for base access Jan 30 16:01:25.102924 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 16:01:25.102933 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 16:01:25.102941 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 16:01:25.102950 kernel: ACPI: Added _OSI(Module Device) Jan 30 16:01:25.102959 kernel: ACPI: Added _OSI(Processor Device) Jan 30 16:01:25.102967 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 16:01:25.102976 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 16:01:25.102984 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 16:01:25.102995 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 16:01:25.103004 kernel: ACPI: Interpreter enabled Jan 30 16:01:25.103012 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 16:01:25.103021 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 16:01:25.103030 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 16:01:25.103038 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 16:01:25.103047 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 16:01:25.103056 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 16:01:25.103207 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 16:01:25.103312 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 16:01:25.103404 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 16:01:25.103418 kernel: acpiphp: Slot [3] registered Jan 30 16:01:25.103427 kernel: acpiphp: Slot [4] registered Jan 30 16:01:25.103435 kernel: acpiphp: Slot [5] registered Jan 30 16:01:25.103444 kernel: acpiphp: Slot [6] registered Jan 30 16:01:25.103453 kernel: acpiphp: Slot [7] registered Jan 30 16:01:25.103465 kernel: acpiphp: Slot [8] registered Jan 30 16:01:25.103474 kernel: acpiphp: Slot [9] registered Jan 30 16:01:25.103483 kernel: acpiphp: Slot [10] registered Jan 30 16:01:25.103491 
kernel: acpiphp: Slot [11] registered Jan 30 16:01:25.103500 kernel: acpiphp: Slot [12] registered Jan 30 16:01:25.103529 kernel: acpiphp: Slot [13] registered Jan 30 16:01:25.103538 kernel: acpiphp: Slot [14] registered Jan 30 16:01:25.103546 kernel: acpiphp: Slot [15] registered Jan 30 16:01:25.103555 kernel: acpiphp: Slot [16] registered Jan 30 16:01:25.103566 kernel: acpiphp: Slot [17] registered Jan 30 16:01:25.103575 kernel: acpiphp: Slot [18] registered Jan 30 16:01:25.103583 kernel: acpiphp: Slot [19] registered Jan 30 16:01:25.103592 kernel: acpiphp: Slot [20] registered Jan 30 16:01:25.103600 kernel: acpiphp: Slot [21] registered Jan 30 16:01:25.103609 kernel: acpiphp: Slot [22] registered Jan 30 16:01:25.103617 kernel: acpiphp: Slot [23] registered Jan 30 16:01:25.103625 kernel: acpiphp: Slot [24] registered Jan 30 16:01:25.103634 kernel: acpiphp: Slot [25] registered Jan 30 16:01:25.103642 kernel: acpiphp: Slot [26] registered Jan 30 16:01:25.103653 kernel: acpiphp: Slot [27] registered Jan 30 16:01:25.103662 kernel: acpiphp: Slot [28] registered Jan 30 16:01:25.103670 kernel: acpiphp: Slot [29] registered Jan 30 16:01:25.103678 kernel: acpiphp: Slot [30] registered Jan 30 16:01:25.103687 kernel: acpiphp: Slot [31] registered Jan 30 16:01:25.103695 kernel: PCI host bridge to bus 0000:00 Jan 30 16:01:25.103800 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 16:01:25.103899 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 16:01:25.103995 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 16:01:25.104083 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 16:01:25.104169 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 30 16:01:25.104255 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 16:01:25.104371 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 16:01:25.104480 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 16:01:25.104619 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 16:01:25.104727 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 30 16:01:25.104826 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 16:01:25.104924 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 16:01:25.105020 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 16:01:25.105111 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 16:01:25.105211 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 16:01:25.105313 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 16:01:25.105405 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 16:01:25.107302 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 16:01:25.107416 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 16:01:25.107525 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 30 16:01:25.107622 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 30 16:01:25.107713 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 30 16:01:25.107810 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 16:01:25.107927 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 16:01:25.108028 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 30 16:01:25.108125 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 30 16:01:25.108222 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 30 16:01:25.108318 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 30 16:01:25.108426 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 30 16:01:25.108549 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 16:01:25.108654 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 30 16:01:25.108751 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 30 16:01:25.108857 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 16:01:25.108955 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 30 16:01:25.109052 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 30 16:01:25.109151 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 16:01:25.109250 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 30 16:01:25.109341 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 30 16:01:25.109430 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 30 16:01:25.109443 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 16:01:25.109452 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 16:01:25.109461 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 16:01:25.109470 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 16:01:25.109479 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 16:01:25.109492 kernel: iommu: Default domain type: Translated Jan 30 16:01:25.109500 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 16:01:25.109566 kernel: PCI: Using ACPI for IRQ routing Jan 30 16:01:25.109577 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 16:01:25.109586 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 16:01:25.109594 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 30 16:01:25.109690 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 16:01:25.109783 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 16:01:25.109879 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 16:01:25.109892 kernel: vgaarb: loaded Jan 30 16:01:25.109901 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 16:01:25.109910 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 16:01:25.109919 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 16:01:25.109928 kernel: pnp: PnP ACPI init Jan 30 16:01:25.110022 kernel: pnp 00:03: [dma 2] Jan 30 16:01:25.110037 kernel: pnp: PnP ACPI: found 5 devices Jan 30 16:01:25.110046 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 16:01:25.110059 kernel: NET: Registered PF_INET protocol family Jan 30 16:01:25.110068 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 16:01:25.110077 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 16:01:25.110086 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 16:01:25.110095 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 16:01:25.110104 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 30 16:01:25.110113 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 16:01:25.110122 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 16:01:25.110131 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 16:01:25.110141 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 16:01:25.110150 kernel: NET: Registered PF_XDP protocol family Jan 30 16:01:25.110233 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 16:01:25.110315 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 16:01:25.110395 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 16:01:25.110475 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 30 16:01:25.110695 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 30 16:01:25.110791 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 16:01:25.110890 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 16:01:25.110903 kernel: PCI: CLS 0 bytes, default 64 Jan 30 16:01:25.110912 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 16:01:25.110921 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 30 16:01:25.110930 kernel: Initialise system trusted keyrings Jan 30 16:01:25.110939 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 16:01:25.110948 kernel: Key type asymmetric registered Jan 30 16:01:25.110957 kernel: Asymmetric key parser 'x509' registered Jan 30 16:01:25.110968 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 16:01:25.110977 kernel: io scheduler mq-deadline registered Jan 30 16:01:25.110986 kernel: io scheduler kyber registered Jan 30 16:01:25.110995 kernel: io scheduler bfq registered Jan 30 16:01:25.111003 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 16:01:25.111013 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 16:01:25.111021 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 16:01:25.111030 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 16:01:25.111039 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 16:01:25.111050 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 16:01:25.111059 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 16:01:25.111067 kernel: random: crng init done Jan 30 16:01:25.111076 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 16:01:25.111085 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 16:01:25.111093 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 16:01:25.111184 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 16:01:25.111198 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 16:01:25.111277 kernel: rtc_cmos 00:04: registered as rtc0 Jan 30 16:01:25.111364 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T16:01:24 UTC (1738252884) Jan 30 16:01:25.111446 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 30 16:01:25.111459 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 16:01:25.111468 kernel: NET: Registered PF_INET6 protocol family Jan 30 16:01:25.111476 kernel: Segment Routing with IPv6 Jan 30 16:01:25.111485 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 16:01:25.111494 kernel: NET: Registered PF_PACKET 
protocol family Jan 30 16:01:25.111502 kernel: Key type dns_resolver registered Jan 30 16:01:25.111530 kernel: IPI shorthand broadcast: enabled Jan 30 16:01:25.111539 kernel: sched_clock: Marking stable (1025007503, 175714586)->(1240300944, -39578855) Jan 30 16:01:25.111548 kernel: registered taskstats version 1 Jan 30 16:01:25.111556 kernel: Loading compiled-in X.509 certificates Jan 30 16:01:25.111571 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 16:01:25.115569 kernel: Key type .fscrypt registered Jan 30 16:01:25.115590 kernel: Key type fscrypt-provisioning registered Jan 30 16:01:25.115601 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 16:01:25.115611 kernel: ima: Allocated hash algorithm: sha1 Jan 30 16:01:25.115626 kernel: ima: No architecture policies found Jan 30 16:01:25.115635 kernel: clk: Disabling unused clocks Jan 30 16:01:25.115645 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 16:01:25.115655 kernel: Write protecting the kernel read-only data: 36864k Jan 30 16:01:25.115664 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 16:01:25.115674 kernel: Run /init as init process Jan 30 16:01:25.115683 kernel: with arguments: Jan 30 16:01:25.115692 kernel: /init Jan 30 16:01:25.115701 kernel: with environment: Jan 30 16:01:25.115713 kernel: HOME=/ Jan 30 16:01:25.115722 kernel: TERM=linux Jan 30 16:01:25.115731 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 16:01:25.115745 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 16:01:25.115758 systemd[1]: Detected virtualization kvm. Jan 30 16:01:25.115769 systemd[1]: Detected architecture x86-64. Jan 30 16:01:25.115779 systemd[1]: Running in initrd. Jan 30 16:01:25.115790 systemd[1]: No hostname configured, using default hostname. Jan 30 16:01:25.115800 systemd[1]: Hostname set to . Jan 30 16:01:25.115811 systemd[1]: Initializing machine ID from VM UUID. Jan 30 16:01:25.115821 systemd[1]: Queued start job for default target initrd.target. Jan 30 16:01:25.115831 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 16:01:25.115858 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 16:01:25.115872 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 16:01:25.115893 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 16:01:25.115905 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 16:01:25.115916 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 16:01:25.115928 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 16:01:25.115939 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 16:01:25.115949 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
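The kernel command line logged above (BOOT_IMAGE=/flatcar/vmlinuz-a, root=LABEL=ROOT, verity.usrhash=..., flatcar.autologin, ...) is exactly what /init and later user space see in /proc/cmdline. A minimal sketch of splitting such a line into bare flags and key=value parameters; the path and parameter names come straight from the log, everything else is illustrative:

    # Parse a kernel command line into bare flags ("flatcar.autologin")
    # and key=value parameters ("root=LABEL=ROOT"). Note that duplicated
    # keys (rootflags= appears twice above) resolve to the last value.
    def parse_cmdline(line: str) -> tuple[set[str], dict[str, str]]:
        flags, params = set(), {}
        for token in line.split():
            if "=" in token:
                key, _, value = token.partition("=")
                params[key] = value
            else:
                flags.add(token)
        return flags, params

    with open("/proc/cmdline") as f:
        flags, params = parse_cmdline(f.read())
    print(params.get("root"))            # e.g. "LABEL=ROOT"
    print(params.get("verity.usrhash"))  # the dm-verity root hash
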
Jan 30 16:01:25.115962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 16:01:25.115972 systemd[1]: Reached target paths.target - Path Units. Jan 30 16:01:25.115983 systemd[1]: Reached target slices.target - Slice Units. Jan 30 16:01:25.115993 systemd[1]: Reached target swap.target - Swaps. Jan 30 16:01:25.116003 systemd[1]: Reached target timers.target - Timer Units. Jan 30 16:01:25.116013 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 16:01:25.116024 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 16:01:25.116034 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 16:01:25.116047 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 16:01:25.116057 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 16:01:25.116068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 16:01:25.116078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 16:01:25.116089 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 16:01:25.116099 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 16:01:25.116109 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 16:01:25.116120 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 16:01:25.116130 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 16:01:25.116142 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 16:01:25.116152 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 16:01:25.116163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 16:01:25.116173 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 16:01:25.116184 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 16:01:25.116194 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 16:01:25.116236 systemd-journald[184]: Collecting audit messages is disabled. Jan 30 16:01:25.116263 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 16:01:25.116277 systemd-journald[184]: Journal started Jan 30 16:01:25.116301 systemd-journald[184]: Runtime Journal (/run/log/journal/ce8dc1684c074b6bb590321b01b291af) is 8.0M, max 78.3M, 70.3M free. Jan 30 16:01:25.081208 systemd-modules-load[185]: Inserted module 'overlay' Jan 30 16:01:25.121546 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 16:01:25.131567 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 16:01:25.133195 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 30 16:01:25.179358 kernel: Bridge firewalling registered Jan 30 16:01:25.178759 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 16:01:25.180106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 16:01:25.181352 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 16:01:25.189746 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 16:01:25.191693 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
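Once systemd-journald starts (above), every record in this dump has the same shape: a timestamp, a source (kernel, systemd[1], systemd-journald[184], ...), and a message. A small sketch, assuming the short-ISO-like timestamp format shown in the log, of splitting one record into those fields:

    import re

    # e.g. "Jan 30 16:01:25.116301 systemd-journald[184]: Journal started"
    RECORD = re.compile(
        r"(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) "
        r"(?P<src>[\w./-]+?)(?:\[(?P<pid>\d+)\])?: "
        r"(?P<msg>.*)"
    )

    def parse_record(line: str):
        m = RECORD.match(line)
        return m.groupdict() if m else None

    rec = parse_record("Jan 30 16:01:25.102592 kernel: SELinux: Initializing.")
    # -> {'ts': 'Jan 30 16:01:25.102592', 'src': 'kernel',
    #     'pid': None, 'msg': 'SELinux: Initializing.'}
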
Jan 30 16:01:25.196714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 16:01:25.200161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 16:01:25.220819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 16:01:25.224435 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 16:01:25.226276 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 16:01:25.237650 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 16:01:25.238697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 16:01:25.260650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 16:01:25.271017 dracut-cmdline[216]: dracut-dracut-053 Jan 30 16:01:25.277157 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 16:01:25.291351 systemd-resolved[219]: Positive Trust Anchors: Jan 30 16:01:25.291369 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 16:01:25.291412 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 16:01:25.294636 systemd-resolved[219]: Defaulting to hostname 'linux'. Jan 30 16:01:25.295734 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 16:01:25.298825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 16:01:25.369612 kernel: SCSI subsystem initialized Jan 30 16:01:25.380569 kernel: Loading iSCSI transport class v2.0-870. Jan 30 16:01:25.392562 kernel: iscsi: registered transport (tcp) Jan 30 16:01:25.416679 kernel: iscsi: registered transport (qla4xxx) Jan 30 16:01:25.416746 kernel: QLogic iSCSI HBA Driver Jan 30 16:01:25.479065 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 16:01:25.486644 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 16:01:25.532984 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
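The negative trust anchors systemd-resolved lists above are zones (RFC 1918 reverse zones, home.arpa, local, test, ...) for which DNSSEC validation is skipped. A toy check, using a handful of zones copied from the log, of whether a name falls under such an anchor:

    # A subset of the negative trust anchors from the log above.
    NEGATIVE_ANCHORS = {
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
        "d.f.ip6.arpa", "local", "internal", "lan", "test",
    }

    def under_negative_anchor(name: str) -> bool:
        # Covered if the name equals an anchor or is a subdomain of one.
        labels = name.rstrip(".").lower().split(".")
        return any(
            ".".join(labels[i:]) in NEGATIVE_ANCHORS
            for i in range(len(labels))
        )

    print(under_negative_anchor("4.3.2.10.in-addr.arpa"))  # True
    print(under_negative_anchor("example.org"))            # False
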
Jan 30 16:01:25.533064 kernel: device-mapper: uevent: version 1.0.3 Jan 30 16:01:25.533744 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 16:01:25.603618 kernel: raid6: sse2x4 gen() 5195 MB/s Jan 30 16:01:25.621601 kernel: raid6: sse2x2 gen() 7790 MB/s Jan 30 16:01:25.640153 kernel: raid6: sse2x1 gen() 9698 MB/s Jan 30 16:01:25.640217 kernel: raid6: using algorithm sse2x1 gen() 9698 MB/s Jan 30 16:01:25.659394 kernel: raid6: .... xor() 7026 MB/s, rmw enabled Jan 30 16:01:25.659475 kernel: raid6: using ssse3x2 recovery algorithm Jan 30 16:01:25.681825 kernel: xor: measuring software checksum speed Jan 30 16:01:25.681899 kernel: prefetch64-sse : 17262 MB/sec Jan 30 16:01:25.683914 kernel: generic_sse : 15690 MB/sec Jan 30 16:01:25.683966 kernel: xor: using function: prefetch64-sse (17262 MB/sec) Jan 30 16:01:25.865583 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 16:01:25.883899 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 16:01:25.890676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 16:01:25.931480 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 30 16:01:25.942049 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 16:01:25.953088 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 16:01:25.986042 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jan 30 16:01:26.035495 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 16:01:26.045792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 16:01:26.097001 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 16:01:26.110178 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 16:01:26.157823 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 16:01:26.159196 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 16:01:26.160486 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 16:01:26.161051 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 16:01:26.167712 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 16:01:26.178153 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 16:01:26.203425 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 16:01:26.207590 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 30 16:01:26.219027 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 30 16:01:26.219162 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 16:01:26.219176 kernel: GPT:17805311 != 20971519 Jan 30 16:01:26.219191 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 16:01:26.219203 kernel: GPT:17805311 != 20971519 Jan 30 16:01:26.219213 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 16:01:26.219224 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 16:01:26.203587 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 16:01:26.205272 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
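The GPT warnings above are plain arithmetic: the virtio disk has 20971520 512-byte sectors (as logged), so the backup GPT header belongs at the last LBA, 20971519, but the on-disk table still records 17805311 because the disk was grown after the image was written. The kernel suggests GNU Parted as the fix; the mismatch itself can be reproduced from the logged numbers:

    SECTOR = 512
    disk_sectors = 20971520                      # "[vda] 20971520 512-byte logical blocks"
    expected_alt_lba = disk_sectors - 1          # last LBA = 20971519
    recorded_alt_lba = 17805311                  # what the on-disk GPT says

    if recorded_alt_lba != expected_alt_lba:
        # Matches the kernel's "GPT:17805311 != 20971519" complaint:
        # the disk grew by this many sectors since the image was made.
        growth = expected_alt_lba - recorded_alt_lba
        print(f"backup header off by {growth} sectors "
              f"(~{growth * SECTOR / 2**30:.1f} GiB of new space)")
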
Jan 30 16:01:26.224677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 16:01:26.224876 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 16:01:26.226894 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 16:01:26.233828 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 16:01:26.235138 kernel: libata version 3.00 loaded. Jan 30 16:01:26.250967 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 16:01:26.270626 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (460) Jan 30 16:01:26.270642 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Jan 30 16:01:26.270655 kernel: scsi host0: ata_piix Jan 30 16:01:26.270790 kernel: scsi host1: ata_piix Jan 30 16:01:26.270911 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 30 16:01:26.270924 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 30 16:01:26.293406 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 16:01:26.330357 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 16:01:26.332647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 16:01:26.345332 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 16:01:26.346466 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 16:01:26.357676 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 16:01:26.374742 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 16:01:26.378210 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 16:01:26.390279 disk-uuid[504]: Primary Header is updated. Jan 30 16:01:26.390279 disk-uuid[504]: Secondary Entries is updated. Jan 30 16:01:26.390279 disk-uuid[504]: Secondary Header is updated. Jan 30 16:01:26.399874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 16:01:26.406547 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 16:01:26.420372 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 16:01:27.424821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 16:01:27.425572 disk-uuid[505]: The operation has completed successfully. Jan 30 16:01:27.503805 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 16:01:27.504098 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 16:01:27.535670 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 16:01:27.541199 sh[526]: Success Jan 30 16:01:27.560853 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 30 16:01:27.650077 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 16:01:27.651614 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 16:01:27.660591 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
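verity-setup above binds /dev/mapper/usr to the verity.usrhash root hash from the kernel command line, using sha256 ("sha256-ssse3" in the log). Real dm-verity hashes 4 KiB blocks into a Merkle tree and compares against one root hash; the sketch below only illustrates the flavor of that check with a whole-file digest, and the file path is a hypothetical stand-in:

    import hashlib

    def sha256_of(path: str, block: int = 4096) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(block):
                h.update(chunk)
        return h.hexdigest()

    # Root hash taken from the verity.usrhash= argument in the log;
    # the path is illustrative only. dm-verity verifies per-block
    # hashes up a Merkle tree rather than hashing the whole device.
    expected = "befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681"
    if sha256_of("/path/to/usr.img") != expected:
        raise SystemExit("verity mismatch: refusing to use /usr")
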
Jan 30 16:01:27.677531 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 16:01:27.677576 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 16:01:27.677589 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 16:01:27.678801 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 16:01:27.680330 kernel: BTRFS info (device dm-0): using free space tree Jan 30 16:01:27.699009 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 16:01:27.701211 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 16:01:27.707869 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 16:01:27.717780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 16:01:27.742601 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 16:01:27.748401 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 16:01:27.748471 kernel: BTRFS info (device vda6): using free space tree Jan 30 16:01:27.760594 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 16:01:27.779351 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 16:01:27.783531 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 16:01:27.798907 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 16:01:27.808924 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 16:01:27.854636 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 16:01:27.859670 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 16:01:27.884081 systemd-networkd[708]: lo: Link UP Jan 30 16:01:27.884092 systemd-networkd[708]: lo: Gained carrier Jan 30 16:01:27.885344 systemd-networkd[708]: Enumeration completed Jan 30 16:01:27.885976 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 16:01:27.885980 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 16:01:27.887664 systemd-networkd[708]: eth0: Link UP Jan 30 16:01:27.887668 systemd-networkd[708]: eth0: Gained carrier Jan 30 16:01:27.887678 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 16:01:27.889909 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 16:01:27.890836 systemd[1]: Reached target network.target - Network. Jan 30 16:01:27.901820 systemd-networkd[708]: eth0: DHCPv4 address 172.24.4.55/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 16:01:27.956023 ignition[640]: Ignition 2.19.0 Jan 30 16:01:27.956038 ignition[640]: Stage: fetch-offline Jan 30 16:01:27.957767 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 16:01:27.956088 ignition[640]: no configs at "/usr/lib/ignition/base.d" Jan 30 16:01:27.956099 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 16:01:27.956211 ignition[640]: parsed url from cmdline: "" Jan 30 16:01:27.956215 ignition[640]: no config URL provided Jan 30 16:01:27.956222 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 16:01:27.956232 ignition[640]: no config at "/usr/lib/ignition/user.ign" Jan 30 16:01:27.956238 ignition[640]: failed to fetch config: resource requires networking Jan 30 16:01:27.956441 ignition[640]: Ignition finished successfully Jan 30 16:01:27.963706 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 16:01:27.977996 ignition[718]: Ignition 2.19.0 Jan 30 16:01:27.978010 ignition[718]: Stage: fetch Jan 30 16:01:27.978199 ignition[718]: no configs at "/usr/lib/ignition/base.d" Jan 30 16:01:27.978214 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 16:01:27.978308 ignition[718]: parsed url from cmdline: "" Jan 30 16:01:27.978312 ignition[718]: no config URL provided Jan 30 16:01:27.978318 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 16:01:27.978327 ignition[718]: no config at "/usr/lib/ignition/user.ign" Jan 30 16:01:27.978467 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 30 16:01:27.978485 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 30 16:01:27.978500 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 30 16:01:28.168762 ignition[718]: GET result: OK Jan 30 16:01:28.168928 ignition[718]: parsing config with SHA512: 96a2752d4bf31108f99f9a296a1c81e68de324386525ff1e0bd5f1ea300b0ae877c411e0caf7403212460faefde4a17d4e093b684271b9e50980463c62f6d4bf Jan 30 16:01:28.181495 unknown[718]: fetched base config from "system" Jan 30 16:01:28.181576 unknown[718]: fetched base config from "system" Jan 30 16:01:28.183609 ignition[718]: fetch: fetch complete Jan 30 16:01:28.181593 unknown[718]: fetched user config from "openstack" Jan 30 16:01:28.183622 ignition[718]: fetch: fetch passed Jan 30 16:01:28.186907 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 16:01:28.183724 ignition[718]: Ignition finished successfully Jan 30 16:01:28.196937 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 16:01:28.230039 ignition[724]: Ignition 2.19.0 Jan 30 16:01:28.230064 ignition[724]: Stage: kargs Jan 30 16:01:28.230459 ignition[724]: no configs at "/usr/lib/ignition/base.d" Jan 30 16:01:28.230485 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 16:01:28.232904 ignition[724]: kargs: kargs passed Jan 30 16:01:28.235207 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 16:01:28.233004 ignition[724]: Ignition finished successfully Jan 30 16:01:28.250379 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 16:01:28.275913 ignition[730]: Ignition 2.19.0 Jan 30 16:01:28.275941 ignition[730]: Stage: disks Jan 30 16:01:28.276348 ignition[730]: no configs at "/usr/lib/ignition/base.d" Jan 30 16:01:28.276375 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 16:01:28.278883 ignition[730]: disks: disks passed Jan 30 16:01:28.280504 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
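The fetch stage above succeeds only once networking is up: Ignition GETs the OpenStack metadata service and logs a SHA512 fingerprint of the config it parsed. A stdlib-only sketch of the same fetch-and-fingerprint step; the URL is the one in the log, while the retry count and delay are assumptions:

    import hashlib, time, urllib.request

    URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(retries: int = 5, delay: float = 2.0) -> bytes:
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(URL, timeout=5) as resp:
                    return resp.read()
            except OSError:
                # Mirrors the "attempt #N" retry behaviour in the log.
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")

    data = fetch_user_data()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
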
Jan 30 16:01:28.278986 ignition[730]: Ignition finished successfully Jan 30 16:01:28.283477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 16:01:28.285113 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 16:01:28.287459 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 16:01:28.289724 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 16:01:28.292405 systemd[1]: Reached target basic.target - Basic System. Jan 30 16:01:28.309737 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 16:01:28.340496 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 16:01:28.354587 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 16:01:28.362720 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 16:01:28.545604 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 16:01:28.547674 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 16:01:28.550286 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 16:01:28.558662 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 16:01:28.562760 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 16:01:28.566334 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 16:01:28.577456 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (746) Jan 30 16:01:28.577501 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 16:01:28.577221 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 30 16:01:28.590012 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 16:01:28.590053 kernel: BTRFS info (device vda6): using free space tree Jan 30 16:01:28.589199 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 16:01:28.597901 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 16:01:28.589264 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 16:01:28.596961 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 16:01:28.598559 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 16:01:28.619673 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 16:01:28.731102 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 16:01:28.738581 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory Jan 30 16:01:28.744943 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 16:01:28.750237 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 16:01:28.888940 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 16:01:28.899647 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 16:01:28.906111 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 16:01:28.914128 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
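initrd-setup-root above tries to seed /sysroot/etc/passwd, group, shadow, and gshadow; on first boot those files do not exist yet, hence the "cut: ... No such file or directory" lines. The script's cut is extracting colon-separated fields; the equivalent parse in Python, with the same tolerance for a missing file:

    # /etc/passwd fields: name:password:UID:GID:GECOS:home:shell
    def read_passwd(path="/etc/passwd"):
        users = {}
        try:
            with open(path) as f:
                for line in f:
                    name, _pw, uid, gid, _gecos, home, shell = \
                        line.rstrip("\n").split(":")
                    users[name] = {"uid": int(uid), "gid": int(gid),
                                   "home": home, "shell": shell}
        except FileNotFoundError:
            pass  # first boot: nothing to seed yet, as in the log
        return users

    print(read_passwd().get("core"))
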
Jan 30 16:01:28.917459 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 16:01:28.961309 ignition[864]: INFO : Ignition 2.19.0 Jan 30 16:01:28.962852 ignition[864]: INFO : Stage: mount Jan 30 16:01:28.965158 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 16:01:28.965158 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 16:01:28.963160 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 16:01:28.969651 ignition[864]: INFO : mount: mount passed Jan 30 16:01:28.969651 ignition[864]: INFO : Ignition finished successfully Jan 30 16:01:28.967763 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 16:01:29.618019 systemd-networkd[708]: eth0: Gained IPv6LL Jan 30 16:01:35.845129 coreos-metadata[748]: Jan 30 16:01:35.844 WARN failed to locate config-drive, using the metadata service API instead Jan 30 16:01:35.887073 coreos-metadata[748]: Jan 30 16:01:35.886 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 16:01:35.900701 coreos-metadata[748]: Jan 30 16:01:35.900 INFO Fetch successful Jan 30 16:01:35.903744 coreos-metadata[748]: Jan 30 16:01:35.903 INFO wrote hostname ci-4081-3-0-2-e08351c9d9.novalocal to /sysroot/etc/hostname Jan 30 16:01:35.906419 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 30 16:01:35.906702 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 30 16:01:35.917766 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 16:01:35.957402 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 16:01:35.972619 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (881) Jan 30 16:01:35.981769 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 16:01:35.981829 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 16:01:35.985954 kernel: BTRFS info (device vda6): using free space tree Jan 30 16:01:36.006387 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 16:01:36.010387 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
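coreos-metadata above first looks for a config drive, logs a WARN when it cannot find one, falls back to the EC2-style metadata endpoint, and writes the answer to /sysroot/etc/hostname. A sketch of that fallback; the endpoint and output path are the ones logged, while the config-drive mount path and JSON key are assumptions about a typical OpenStack config drive:

    import json, os, urllib.request

    CONFIG_DRIVE = "/media/configdrive/openstack/latest/meta_data.json"  # assumed path
    METADATA_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def fetch_hostname() -> str:
        if os.path.exists(CONFIG_DRIVE):
            with open(CONFIG_DRIVE) as f:
                return json.load(f)["hostname"]
        # The WARN-and-fall-back path taken in the log above.
        with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
            return resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(fetch_hostname() + "\n")
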
Jan 30 16:01:36.039249 ignition[900]: INFO : Ignition 2.19.0
Jan 30 16:01:36.041256 ignition[900]: INFO : Stage: files
Jan 30 16:01:36.041256 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 16:01:36.041256 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 16:01:36.046709 ignition[900]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 16:01:36.046709 ignition[900]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 16:01:36.046709 ignition[900]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 16:01:36.051145 ignition[900]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 16:01:36.051145 ignition[900]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 16:01:36.051145 ignition[900]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 16:01:36.050307 unknown[900]: wrote ssh authorized keys file for user: core
Jan 30 16:01:36.057030 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 16:01:36.057030 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 16:01:36.057030 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 16:01:36.057030 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 16:01:36.191150 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 16:01:36.515773 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 16:01:36.515773 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 16:01:36.515773 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 16:01:36.515773 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 16:01:36.515773 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 16:01:36.515773 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 16:01:36.515773 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 16:01:36.515773 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 16:01:36.532695 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 16:01:36.532695 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 16:01:36.532695 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 16:01:36.532695 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 16:01:36.532695 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 16:01:36.532695 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 16:01:36.532695 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 16:01:37.060084 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 16:01:38.655087 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 16:01:38.655087 ignition[900]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 30 16:01:38.659301 ignition[900]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 16:01:38.659301 ignition[900]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 16:01:38.659301 ignition[900]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 30 16:01:38.659301 ignition[900]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 16:01:38.659301 ignition[900]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 16:01:38.674668 ignition[900]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 16:01:38.674668 ignition[900]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 30 16:01:38.674668 ignition[900]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 16:01:38.674668 ignition[900]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 16:01:38.674668 ignition[900]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 16:01:38.674668 ignition[900]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 16:01:38.674668 ignition[900]: INFO : files: files passed
Jan 30 16:01:38.674668 ignition[900]: INFO : Ignition finished successfully
Jan 30 16:01:38.661233 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 16:01:38.672200 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 16:01:38.676690 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 16:01:38.682315 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 16:01:38.682570 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
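The Ignition stage above is driven by a declarative config rather than imperative steps. A minimal Butane sketch that would reproduce roughly these operations is shown below; the SSH key, the drop-in body, and the prepare-helm.service unit text are placeholders (the log records only that they were written, not their contents), and the variant/version pair is an assumption:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - "ssh-ed25519 AAAA...placeholder"
    storage:
      files:
        # op(4): fetched over HTTPS, as logged
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
        # op(b): the sysext image itself
        - path: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw
      links:
        # op(a): activates the sysext by symlinking it into /etc/extensions
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
    systemd:
      units:
        # op(c)/op(d): drop-in for the stock containerd unit
        - name: containerd.service
          dropins:
            - name: 10-use-cgroupfs.conf
              contents: |
                # body not shown in the log
        # op(e)/op(f)/op(10): new unit, preset to enabled
        - name: prepare-helm.service
          enabled: true
          contents: |
            # unit body not shown in the log

The plain files written in op(3) and op(5) through op(9) (/etc/flatcar-cgroupv1, install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, /etc/flatcar/update.conf) would be additional storage.files entries of the same shape.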
Jan 30 16:01:38.699001 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 16:01:38.700327 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 16:01:38.701778 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 16:01:38.707387 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 16:01:38.710481 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 16:01:38.717849 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 16:01:38.742897 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 16:01:38.744602 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 16:01:38.745917 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 16:01:38.747840 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 16:01:38.750433 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 16:01:38.759654 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 16:01:38.786804 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 16:01:38.792827 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 16:01:38.804027 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 16:01:38.804887 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 16:01:38.807289 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 16:01:38.809538 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 16:01:38.809670 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 16:01:38.812172 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 16:01:38.813313 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 16:01:38.815525 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 16:01:38.817440 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 16:01:38.819337 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 16:01:38.821591 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 16:01:38.823847 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 16:01:38.826120 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 16:01:38.828322 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 16:01:38.830610 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 16:01:38.832777 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 16:01:38.832891 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 16:01:38.835370 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 16:01:38.836678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 16:01:38.838472 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 16:01:38.838615 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 16:01:38.840736 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 16:01:38.840862 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 16:01:38.844138 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 16:01:38.844262 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 16:01:38.845211 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 16:01:38.845331 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 16:01:38.857666 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 16:01:38.859685 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 16:01:38.861050 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 16:01:38.861995 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 16:01:38.863555 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 16:01:38.864306 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 16:01:38.874389 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 16:01:38.875236 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 16:01:38.878292 ignition[952]: INFO : Ignition 2.19.0
Jan 30 16:01:38.878292 ignition[952]: INFO : Stage: umount
Jan 30 16:01:38.878292 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 16:01:38.878292 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 16:01:38.882882 ignition[952]: INFO : umount: umount passed
Jan 30 16:01:38.883566 ignition[952]: INFO : Ignition finished successfully
Jan 30 16:01:38.884927 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 16:01:38.885053 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 16:01:38.887302 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 16:01:38.887354 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 16:01:38.888031 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 16:01:38.888074 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 16:01:38.889623 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 16:01:38.889663 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 16:01:38.890491 systemd[1]: Stopped target network.target - Network.
Jan 30 16:01:38.892123 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 16:01:38.892173 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 16:01:38.893047 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 16:01:38.893559 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 16:01:38.894224 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 16:01:38.896643 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 16:01:38.897239 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 16:01:38.898540 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 16:01:38.898585 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 16:01:38.899639 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 16:01:38.899675 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 16:01:38.900896 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 16:01:38.900945 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 16:01:38.902185 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 16:01:38.902223 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 16:01:38.903371 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 16:01:38.904554 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 16:01:38.906645 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 16:01:38.907155 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 16:01:38.907236 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 16:01:38.907546 systemd-networkd[708]: eth0: DHCPv6 lease lost
Jan 30 16:01:38.909231 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 16:01:38.909334 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 16:01:38.910919 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 16:01:38.910970 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 16:01:38.912393 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 16:01:38.912443 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 16:01:38.919669 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 16:01:38.924029 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 16:01:38.924091 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 16:01:38.927736 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 16:01:38.929173 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 16:01:38.929264 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 16:01:38.937464 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 16:01:38.937577 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 16:01:38.939216 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 16:01:38.939274 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 16:01:38.939914 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 16:01:38.939958 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 16:01:38.941450 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 16:01:38.941689 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 16:01:38.942718 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 16:01:38.942853 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 16:01:38.945191 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 16:01:38.945282 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 16:01:38.946656 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 16:01:38.946689 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 16:01:38.947844 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 16:01:38.947905 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 16:01:38.950582 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 16:01:38.950691 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 16:01:38.953329 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 16:01:38.953433 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 16:01:38.967672 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 16:01:38.968919 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 16:01:38.968969 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 16:01:38.970300 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 16:01:38.970342 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 16:01:38.971595 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 16:01:38.971636 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 16:01:38.972277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 16:01:38.972321 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 16:01:38.973300 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 16:01:38.973399 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 16:01:38.974799 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 16:01:38.982746 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 16:01:39.002267 systemd[1]: Switching root.
Jan 30 16:01:39.034548 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 30 16:01:39.034650 systemd-journald[184]: Journal stopped
Jan 30 16:01:40.953298 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 16:01:40.953378 kernel: SELinux: policy capability open_perms=1
Jan 30 16:01:40.953394 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 16:01:40.953406 kernel: SELinux: policy capability always_check_network=0
Jan 30 16:01:40.953418 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 16:01:40.953432 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 16:01:40.953453 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 16:01:40.953464 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 16:01:40.953476 systemd[1]: Successfully loaded SELinux policy in 66.249ms.
Jan 30 16:01:40.953496 kernel: audit: type=1403 audit(1738252899.834:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 16:01:40.954826 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.157ms.
Jan 30 16:01:40.954847 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 16:01:40.954859 systemd[1]: Detected virtualization kvm.
Jan 30 16:01:40.954871 systemd[1]: Detected architecture x86-64.
Jan 30 16:01:40.954889 systemd[1]: Detected first boot.
Jan 30 16:01:40.954901 systemd[1]: Hostname set to .
Jan 30 16:01:40.954913 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 16:01:40.954924 zram_generator::config[1011]: No configuration found.
Jan 30 16:01:40.954937 systemd[1]: Populated /etc with preset unit settings.
Jan 30 16:01:40.954949 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 16:01:40.954962 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 16:01:40.954974 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 16:01:40.954990 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 16:01:40.955002 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 16:01:40.955014 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 16:01:40.955026 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 16:01:40.955038 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 16:01:40.955050 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 16:01:40.955062 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 16:01:40.955074 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 16:01:40.955092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 16:01:40.955104 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 16:01:40.955116 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 16:01:40.955128 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 16:01:40.955140 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 16:01:40.955152 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 16:01:40.955168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 16:01:40.955180 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 16:01:40.955191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 16:01:40.955206 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 16:01:40.955219 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 16:01:40.955231 systemd[1]: Reached target swap.target - Swaps.
Jan 30 16:01:40.955243 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 16:01:40.955258 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 16:01:40.955269 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 16:01:40.955281 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 16:01:40.955295 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 16:01:40.955307 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 16:01:40.955319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 16:01:40.955331 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 16:01:40.955343 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 16:01:40.955356 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
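The zram_generator message above means no zram-generator.conf was found, so no compressed-RAM swap device is created on this host. If one were wanted, a config at /etc/systemd/zram-generator.conf of roughly this shape (syntax per zram-generator's documented format; the values here are illustrative assumptions) would be picked up at boot:

    [zram0]
    # cap the zram device at half of RAM, at most 4 GiB
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd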
Jan 30 16:01:40.955369 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 16:01:40.955381 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 16:01:40.955393 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 16:01:40.955407 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 16:01:40.955419 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 16:01:40.955432 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 16:01:40.955444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 16:01:40.955456 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 16:01:40.955468 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 16:01:40.955480 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 16:01:40.955493 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 16:01:40.955525 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 16:01:40.955539 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 16:01:40.955551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 16:01:40.955563 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 16:01:40.955576 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 30 16:01:40.955588 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 30 16:01:40.955599 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 16:01:40.955611 kernel: loop: module loaded
Jan 30 16:01:40.955623 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 16:01:40.955640 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 16:01:40.955652 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 16:01:40.955663 kernel: ACPI: bus type drm_connector registered
Jan 30 16:01:40.955674 kernel: fuse: init (API version 7.39)
Jan 30 16:01:40.955685 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 16:01:40.955714 systemd-journald[1122]: Collecting audit messages is disabled.
Jan 30 16:01:40.955744 systemd-journald[1122]: Journal started
Jan 30 16:01:40.955768 systemd-journald[1122]: Runtime Journal (/run/log/journal/ce8dc1684c074b6bb590321b01b291af) is 8.0M, max 78.3M, 70.3M free.
Jan 30 16:01:40.962551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 16:01:40.968544 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 16:01:40.969796 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 16:01:40.970662 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 16:01:40.971325 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 16:01:40.971995 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 16:01:40.972671 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 16:01:40.973339 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 16:01:40.974218 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 16:01:40.975303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 16:01:40.976354 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 16:01:40.976630 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 16:01:40.977487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 16:01:40.978013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 16:01:40.978818 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 16:01:40.978973 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 16:01:40.979894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 16:01:40.980106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 16:01:40.981036 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 16:01:40.981244 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 16:01:40.982034 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 16:01:40.982299 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 16:01:40.983166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 16:01:40.984027 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 16:01:40.985209 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 16:01:40.996808 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 16:01:41.003659 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 16:01:41.005648 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 16:01:41.006192 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 16:01:41.015759 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 16:01:41.019790 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 16:01:41.021970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 16:01:41.036190 systemd-journald[1122]: Time spent on flushing to /var/log/journal/ce8dc1684c074b6bb590321b01b291af is 26.579ms for 928 entries.
Jan 30 16:01:41.036190 systemd-journald[1122]: System Journal (/var/log/journal/ce8dc1684c074b6bb590321b01b291af) is 8.0M, max 584.8M, 576.8M free.
Jan 30 16:01:41.078109 systemd-journald[1122]: Received client request to flush runtime journal.
Jan 30 16:01:41.037949 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 16:01:41.038903 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 16:01:41.040549 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 16:01:41.048737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 16:01:41.058656 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 16:01:41.062714 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 16:01:41.063604 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 16:01:41.068903 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 16:01:41.082430 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 16:01:41.101921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 16:01:41.118574 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 16:01:41.126841 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 16:01:41.132897 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Jan 30 16:01:41.132918 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Jan 30 16:01:41.139235 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 16:01:41.140815 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 16:01:41.148795 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 16:01:41.189925 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 16:01:41.196782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 16:01:41.212318 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Jan 30 16:01:41.212643 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Jan 30 16:01:41.216745 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 16:01:41.779264 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 16:01:41.790812 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 16:01:41.813377 systemd-udevd[1195]: Using default interface naming scheme 'v255'.
Jan 30 16:01:41.843707 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 16:01:41.862646 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 16:01:41.906602 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1197)
Jan 30 16:01:41.911021 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 30 16:01:41.963259 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 16:01:42.023208 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 16:01:42.038532 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 30 16:01:42.042674 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 16:01:42.064645 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 16:01:42.077601 kernel: ACPI: button: Power Button [PWRF]
Jan 30 16:01:42.101217 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 30 16:01:42.121535 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 16:01:42.139421 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 30 16:01:42.139479 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 30 16:01:42.145743 systemd-networkd[1207]: lo: Link UP
Jan 30 16:01:42.146026 kernel: Console: switching to colour dummy device 80x25
Jan 30 16:01:42.146062 systemd-networkd[1207]: lo: Gained carrier
Jan 30 16:01:42.147164 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 16:01:42.147191 kernel: [drm] features: -context_init
Jan 30 16:01:42.148165 systemd-networkd[1207]: Enumeration completed
Jan 30 16:01:42.148973 systemd-networkd[1207]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 16:01:42.149034 systemd-networkd[1207]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 16:01:42.149668 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 16:01:42.150772 systemd-networkd[1207]: eth0: Link UP
Jan 30 16:01:42.150779 systemd-networkd[1207]: eth0: Gained carrier
Jan 30 16:01:42.150792 systemd-networkd[1207]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 16:01:42.153030 kernel: [drm] number of scanouts: 1
Jan 30 16:01:42.153099 kernel: [drm] number of cap sets: 0
Jan 30 16:01:42.161558 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 30 16:01:42.156797 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 16:01:42.162118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 16:01:42.163599 systemd-networkd[1207]: eth0: DHCPv4 address 172.24.4.55/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 30 16:01:42.168548 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 16:01:42.168606 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 16:01:42.180845 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 16:01:42.182952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 16:01:42.183202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 16:01:42.189679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 16:01:42.193881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 16:01:42.194162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 16:01:42.199717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 16:01:42.212990 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 16:01:42.219762 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 16:01:42.238007 lvm[1243]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 16:01:42.270713 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
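eth0 is matched here by the stock catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about the "potentially unpredictable interface name", and DHCPv4 then hands the host 172.24.4.55/24. The unit body is not part of the log; a catch-all DHCP unit of roughly this form would behave the same way:

    [Match]
    # matches any interface name, hence the warning above
    Name=*

    [Network]
    DHCP=yes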
Jan 30 16:01:42.273254 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 16:01:42.278718 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 16:01:42.283428 lvm[1246]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 16:01:42.299898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 16:01:42.304759 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 16:01:42.307541 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 16:01:42.307667 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 16:01:42.307697 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 16:01:42.307777 systemd[1]: Reached target machines.target - Containers.
Jan 30 16:01:42.309291 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 16:01:42.313653 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 16:01:42.315411 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 16:01:42.317023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 16:01:42.319875 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 16:01:42.327876 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 16:01:42.341775 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 16:01:42.345676 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 16:01:42.364479 kernel: loop0: detected capacity change from 0 to 8
Jan 30 16:01:42.377326 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 16:01:42.384594 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 16:01:42.391063 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 16:01:42.391897 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 16:01:42.413251 kernel: loop1: detected capacity change from 0 to 142488
Jan 30 16:01:42.486633 kernel: loop2: detected capacity change from 0 to 210664
Jan 30 16:01:42.563606 kernel: loop3: detected capacity change from 0 to 140768
Jan 30 16:01:42.633762 kernel: loop4: detected capacity change from 0 to 8
Jan 30 16:01:42.642591 kernel: loop5: detected capacity change from 0 to 142488
Jan 30 16:01:42.700915 kernel: loop6: detected capacity change from 0 to 210664
Jan 30 16:01:42.735541 kernel: loop7: detected capacity change from 0 to 140768
Jan 30 16:01:42.789478 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 30 16:01:42.790589 (sd-merge)[1271]: Merged extensions into '/usr'.
Jan 30 16:01:42.796089 systemd[1]: Reloading requested from client PID 1258 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 16:01:42.796183 systemd[1]: Reloading...
Jan 30 16:01:42.891539 zram_generator::config[1296]: No configuration found.
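The (sd-merge) lines are systemd-sysext overlaying the discovered extension images onto /usr, including the kubernetes image activated earlier via the /etc/extensions/kubernetes.raw symlink; the loop device capacity changes just above are those images being attached. On a running system the same merge can be inspected or redone with the standard verbs, for example:

    systemd-sysext status    # show which hierarchies are extended and by what
    systemd-sysext refresh   # unmerge and re-merge after images change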
Jan 30 16:01:43.097183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 16:01:43.167073 systemd[1]: Reloading finished in 369 ms.
Jan 30 16:01:43.183987 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 16:01:43.195851 systemd[1]: Starting ensure-sysext.service...
Jan 30 16:01:43.203749 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 16:01:43.213145 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)...
Jan 30 16:01:43.213168 systemd[1]: Reloading...
Jan 30 16:01:43.255649 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 16:01:43.256041 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 16:01:43.256912 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 16:01:43.257227 systemd-tmpfiles[1361]: ACLs are not supported, ignoring.
Jan 30 16:01:43.257288 systemd-tmpfiles[1361]: ACLs are not supported, ignoring.
Jan 30 16:01:43.264183 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 16:01:43.264423 systemd-tmpfiles[1361]: Skipping /boot
Jan 30 16:01:43.275747 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 16:01:43.275934 systemd-tmpfiles[1361]: Skipping /boot
Jan 30 16:01:43.295536 zram_generator::config[1388]: No configuration found.
Jan 30 16:01:43.295983 ldconfig[1254]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 16:01:43.378773 systemd-networkd[1207]: eth0: Gained IPv6LL
Jan 30 16:01:43.469185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 16:01:43.536628 systemd[1]: Reloading finished in 323 ms.
Jan 30 16:01:43.556282 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 16:01:43.559900 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 16:01:43.567975 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 16:01:43.577383 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 16:01:43.584745 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 16:01:43.598150 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 16:01:43.604015 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 16:01:43.618881 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 16:01:43.641458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 16:01:43.643057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 16:01:43.651074 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 16:01:43.666978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 16:01:43.683287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 16:01:43.687293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 16:01:43.692439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 16:01:43.694767 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 16:01:43.700888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 16:01:43.701479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 16:01:43.707282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 16:01:43.708970 augenrules[1485]: No rules
Jan 30 16:01:43.707781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 16:01:43.710048 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 16:01:43.715237 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 16:01:43.717876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 16:01:43.729729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 16:01:43.730895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 16:01:43.738648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 16:01:43.742636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 16:01:43.754176 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 16:01:43.757968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 16:01:43.764149 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 16:01:43.767333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 16:01:43.771213 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 16:01:43.775984 systemd-resolved[1468]: Positive Trust Anchors:
Jan 30 16:01:43.776002 systemd-resolved[1468]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 16:01:43.776049 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 16:01:43.779319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 16:01:43.780288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 16:01:43.782148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 16:01:43.782305 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 16:01:43.786181 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 16:01:43.786426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 16:01:43.796480 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 16:01:43.796654 systemd-resolved[1468]: Using system hostname 'ci-4081-3-0-2-e08351c9d9.novalocal'.
Jan 30 16:01:43.802699 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 16:01:43.810005 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 16:01:43.815220 systemd[1]: Reached target network.target - Network.
Jan 30 16:01:43.817753 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 16:01:43.820050 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 16:01:43.820569 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 16:01:43.820757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 16:01:43.826738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 16:01:43.837705 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 16:01:43.842760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 16:01:43.849288 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 16:01:43.850066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 16:01:43.850127 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 16:01:43.850152 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 16:01:43.850914 systemd[1]: Finished ensure-sysext.service.
Jan 30 16:01:43.853078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 16:01:43.853347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 16:01:43.857675 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 16:01:43.857878 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 16:01:43.859875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 16:01:43.860087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 16:01:43.861488 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 16:01:43.861777 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 16:01:43.867177 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 16:01:43.867567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 16:01:43.872636 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 16:01:43.934014 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 16:01:43.936309 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 16:01:43.939909 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 16:01:43.942195 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 16:01:43.944593 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 16:01:43.946973 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 16:01:43.947193 systemd[1]: Reached target paths.target - Path Units.
Jan 30 16:01:43.949404 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 16:01:43.951644 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 16:01:43.953781 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 16:01:43.955457 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 16:01:43.958387 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 16:01:43.962525 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 16:01:43.967618 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 16:01:43.973155 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 16:01:43.975823 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 16:01:43.976574 systemd[1]: Reached target basic.target - Basic System.
Jan 30 16:01:43.977344 systemd[1]: System is tainted: cgroupsv1
Jan 30 16:01:43.977401 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 16:01:43.977426 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 16:01:43.986590 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 16:01:43.993765 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 16:01:44.002642 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 16:01:44.010967 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 16:01:44.017716 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 16:01:44.020081 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 16:01:44.026952 jq[1539]: false
Jan 30 16:01:44.033650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 16:01:44.037296 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 16:01:44.444102 systemd-timesyncd[1531]: Contacted time server 82.64.100.180:123 (0.flatcar.pool.ntp.org).
Jan 30 16:01:44.444208 systemd-timesyncd[1531]: Initial clock synchronization to Thu 2025-01-30 16:01:44.443819 UTC.
Jan 30 16:01:44.444274 systemd-resolved[1468]: Clock change detected. Flushing caches.
Jan 30 16:01:44.451393 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 16:01:44.465672 extend-filesystems[1542]: Found loop4
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found loop5
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found loop6
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found loop7
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found vda
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found vda1
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found vda2
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found vda3
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found usr
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found vda4
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found vda6
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found vda7
Jan 30 16:01:44.472265 extend-filesystems[1542]: Found vda9
Jan 30 16:01:44.472265 extend-filesystems[1542]: Checking size of /dev/vda9
Jan 30 16:01:44.471526 dbus-daemon[1538]: [system] SELinux support is enabled
Jan 30 16:01:44.468145 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 16:01:44.512760 extend-filesystems[1542]: Resized partition /dev/vda9
Jan 30 16:01:44.485210 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 16:01:44.525005 extend-filesystems[1566]: resize2fs 1.47.1 (20-May-2024)
Jan 30 16:01:44.612162 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 30 16:01:44.612209 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 30 16:01:44.612228 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1203)
Jan 30 16:01:44.500230 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 16:01:44.520199 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 16:01:44.538002 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 16:01:44.543101 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 16:01:44.556547 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 16:01:44.612979 jq[1574]: true
Jan 30 16:01:44.568661 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 16:01:44.581403 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 16:01:44.581673 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 16:01:44.596340 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 16:01:44.596628 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 16:01:44.597734 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 16:01:44.603927 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 16:01:44.604224 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 16:01:44.622206 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 16:01:44.622206 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 30 16:01:44.622206 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
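extend-filesystems grows the root filesystem to fill its already enlarged partition: resize2fs reports the ext4 filesystem on /dev/vda9 going from 1617920 to 2014203 4k blocks while mounted on /, which ext4 supports online. The manual equivalent, assuming the partition itself has already been grown, is a single call:

    resize2fs /dev/vda9    # online grow of a mounted ext4 filesystem to fill its partition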
Jan 30 16:01:44.642724 update_engine[1572]: I20250130 16:01:44.619189 1572 main.cc:92] Flatcar Update Engine starting
Jan 30 16:01:44.642724 update_engine[1572]: I20250130 16:01:44.636338 1572 update_check_scheduler.cc:74] Next update check in 3m17s
Jan 30 16:01:44.627718 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 16:01:44.651130 extend-filesystems[1542]: Resized filesystem in /dev/vda9
Jan 30 16:01:44.627998 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 16:01:44.648426 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 16:01:44.671257 tar[1583]: linux-amd64/helm
Jan 30 16:01:44.682650 jq[1584]: true
Jan 30 16:01:44.680134 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 16:01:44.683267 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 16:01:44.683294 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 16:01:44.684872 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 16:01:44.684891 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 16:01:44.688950 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 16:01:44.700181 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 16:01:44.785792 systemd-logind[1567]: New seat seat0.
Jan 30 16:01:44.789722 systemd-logind[1567]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 30 16:01:44.789743 systemd-logind[1567]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 16:01:44.789960 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 16:01:44.881417 bash[1616]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 16:01:44.871520 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 16:01:44.886326 systemd[1]: Starting sshkeys.service...
Jan 30 16:01:44.916426 locksmithd[1599]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 16:01:44.921393 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 16:01:44.933653 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 16:01:45.234916 containerd[1586]: time="2025-01-30T16:01:45.234785674Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 16:01:45.244048 sshd_keygen[1577]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 16:01:45.286001 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 16:01:45.300560 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 16:01:45.304077 containerd[1586]: time="2025-01-30T16:01:45.304037225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307129765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307192463Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307220044Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307444345Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307497304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307568959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307587033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307848193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307868471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307885272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 16:01:45.308987 containerd[1586]: time="2025-01-30T16:01:45.307900160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 16:01:45.309563 containerd[1586]: time="2025-01-30T16:01:45.307989788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 16:01:45.309563 containerd[1586]: time="2025-01-30T16:01:45.308289310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 16:01:45.309779 containerd[1586]: time="2025-01-30T16:01:45.309727567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 16:01:45.309811 containerd[1586]: time="2025-01-30T16:01:45.309777000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 16:01:45.309995 containerd[1586]: time="2025-01-30T16:01:45.309966065Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 16:01:45.310100 containerd[1586]: time="2025-01-30T16:01:45.310070981Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 16:01:45.320116 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 16:01:45.320360 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 16:01:45.331382 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335059728Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335154005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335181306Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335205140Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335238553Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335468765Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335857233Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335959876Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335978431Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.335993639Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.336009439Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.336054594Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.336068930Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 16:01:45.337053 containerd[1586]: time="2025-01-30T16:01:45.336084500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336100740Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336115378Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336129755Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336146165Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336175340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336191130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336206268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336222258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336235723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336250150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336262544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336278013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336292319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337454 containerd[1586]: time="2025-01-30T16:01:45.336308720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336322526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336335661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336353013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336370506Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336394020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336407115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336420740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336461006Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336486704Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336498747Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336512332Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336522922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336542659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 16:01:45.337745 containerd[1586]: time="2025-01-30T16:01:45.336557407Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 16:01:45.338066 containerd[1586]: time="2025-01-30T16:01:45.336569820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 16:01:45.338090 containerd[1586]: time="2025-01-30T16:01:45.336856768Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 16:01:45.338090 containerd[1586]: time="2025-01-30T16:01:45.336927651Z" level=info msg="Connect containerd service"
Jan 30 16:01:45.338090 containerd[1586]: time="2025-01-30T16:01:45.336966985Z" level=info msg="using legacy CRI server"
Jan 30 16:01:45.338090 containerd[1586]: time="2025-01-30T16:01:45.336975290Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 16:01:45.338461 containerd[1586]: time="2025-01-30T16:01:45.338442362Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339174485Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339494425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339538257Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339576489Z" level=info msg="Start subscribing containerd event"
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339616103Z" level=info msg="Start recovering state"
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339669113Z" level=info msg="Start event monitor"
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339681356Z" level=info msg="Start snapshots syncer"
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339691445Z" level=info msg="Start cni network conf syncer for default"
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339699429Z" level=info msg="Start streaming server"
Jan 30 16:01:45.342166 containerd[1586]: time="2025-01-30T16:01:45.339759893Z" level=info msg="containerd successfully booted in 0.108065s"
Jan 30 16:01:45.340171 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 16:01:45.346443 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 16:01:45.358482 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 16:01:45.368956 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 16:01:45.371659 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 16:01:45.508312 tar[1583]: linux-amd64/LICENSE
Jan 30 16:01:45.508312 tar[1583]: linux-amd64/README.md
Jan 30 16:01:45.522541 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 16:01:46.579339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
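Note: containerd boots cleanly but logs "no network config found in /etc/cni/net.d", which is expected on a node whose CNI plugin has not been installed yet; the error clears once a network add-on drops a config file there. A minimal sketch of an equivalent check, assuming Python 3 on the host; the directory is the one named in the log, and the file extensions are an assumption based on common CNI packaging:

    # Look for a CNI network config the way the logged error implies.
    import glob, os

    CNI_CONF_DIR = "/etc/cni/net.d"   # directory named in the log
    confs = [p for ext in ("*.conf", "*.conflist", "*.json")
             for p in glob.glob(os.path.join(CNI_CONF_DIR, ext))]
    if confs:
        print("CNI configs found:", confs)
    else:
        print(f"no network config found in {CNI_CONF_DIR} (matches the logged error)")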
Jan 30 16:01:46.587893 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 16:01:46.764579 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 16:01:46.780948 systemd[1]: Started sshd@0-172.24.4.55:22-172.24.4.1:45268.service - OpenSSH per-connection server daemon (172.24.4.1:45268).
Jan 30 16:01:47.763657 sshd[1672]: Accepted publickey for core from 172.24.4.1 port 45268 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:01:47.769377 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:01:47.794202 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 16:01:47.806461 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 16:01:47.812267 systemd-logind[1567]: New session 1 of user core.
Jan 30 16:01:47.831186 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 16:01:47.843380 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 16:01:47.854569 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 16:01:47.969001 kubelet[1671]: E0130 16:01:47.968917 1671 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 16:01:47.969788 systemd[1684]: Queued start job for default target default.target.
Jan 30 16:01:47.970910 systemd[1684]: Created slice app.slice - User Application Slice.
Jan 30 16:01:47.970939 systemd[1684]: Reached target paths.target - Paths.
Jan 30 16:01:47.970955 systemd[1684]: Reached target timers.target - Timers.
Jan 30 16:01:47.973038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 16:01:47.973211 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 16:01:47.980159 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 16:01:47.993165 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 16:01:47.993232 systemd[1684]: Reached target sockets.target - Sockets.
Jan 30 16:01:47.993247 systemd[1684]: Reached target basic.target - Basic System.
Jan 30 16:01:47.993314 systemd[1684]: Reached target default.target - Main User Target.
Jan 30 16:01:47.993344 systemd[1684]: Startup finished in 132ms.
Jan 30 16:01:47.993444 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 16:01:47.999366 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 16:01:48.490702 systemd[1]: Started sshd@1-172.24.4.55:22-172.24.4.1:45274.service - OpenSSH per-connection server daemon (172.24.4.1:45274).
Jan 30 16:01:50.050130 sshd[1700]: Accepted publickey for core from 172.24.4.1 port 45274 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:01:50.052913 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:01:50.062776 systemd-logind[1567]: New session 2 of user core.
Jan 30 16:01:50.075844 systemd[1]: Started session-2.scope - Session 2 of User core.
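Note: the kubelet exit above, and the identical failures that recur below, happen because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-provisioned nodes that file is written by kubeadm init/join, so systemd keeps scheduling restarts (the "restart counter is at N" entries that follow) until it appears. A minimal sketch of the precondition check, assuming Python 3; the path is the one in the logged error:

    # Reproduce the precondition behind the repeated kubelet failures:
    # the config file named in the log is absent until kubeadm writes it.
    import os

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"   # path from the log
    if not os.path.exists(KUBELET_CONFIG):
        print(f"open {KUBELET_CONFIG}: no such file or directory "
              "(kubelet exits 1; systemd schedules another restart)")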
Jan 30 16:01:50.431665 login[1655]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 16:01:50.432595 login[1653]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 16:01:50.443055 systemd-logind[1567]: New session 4 of user core.
Jan 30 16:01:50.456818 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 16:01:50.463709 systemd-logind[1567]: New session 3 of user core.
Jan 30 16:01:50.473808 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 16:01:50.786370 sshd[1700]: pam_unix(sshd:session): session closed for user core
Jan 30 16:01:50.803784 systemd[1]: Started sshd@2-172.24.4.55:22-172.24.4.1:45286.service - OpenSSH per-connection server daemon (172.24.4.1:45286).
Jan 30 16:01:50.807542 systemd[1]: sshd@1-172.24.4.55:22-172.24.4.1:45274.service: Deactivated successfully.
Jan 30 16:01:50.815085 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 16:01:50.817688 systemd-logind[1567]: Session 2 logged out. Waiting for processes to exit.
Jan 30 16:01:50.823185 systemd-logind[1567]: Removed session 2.
Jan 30 16:01:51.506528 coreos-metadata[1536]: Jan 30 16:01:51.506 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 16:01:51.550987 coreos-metadata[1536]: Jan 30 16:01:51.550 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 30 16:01:51.796381 coreos-metadata[1536]: Jan 30 16:01:51.796 INFO Fetch successful
Jan 30 16:01:51.796381 coreos-metadata[1536]: Jan 30 16:01:51.796 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 30 16:01:51.810315 coreos-metadata[1536]: Jan 30 16:01:51.810 INFO Fetch successful
Jan 30 16:01:51.810315 coreos-metadata[1536]: Jan 30 16:01:51.810 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 30 16:01:51.822916 coreos-metadata[1536]: Jan 30 16:01:51.822 INFO Fetch successful
Jan 30 16:01:51.822916 coreos-metadata[1536]: Jan 30 16:01:51.822 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 30 16:01:51.838180 coreos-metadata[1536]: Jan 30 16:01:51.838 INFO Fetch successful
Jan 30 16:01:51.838180 coreos-metadata[1536]: Jan 30 16:01:51.838 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 30 16:01:51.854060 coreos-metadata[1536]: Jan 30 16:01:51.853 INFO Fetch successful
Jan 30 16:01:51.854060 coreos-metadata[1536]: Jan 30 16:01:51.853 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 30 16:01:51.870920 coreos-metadata[1536]: Jan 30 16:01:51.870 INFO Fetch successful
Jan 30 16:01:51.906193 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 16:01:51.910143 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
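Note: coreos-metadata fails to find a config drive and falls back to the link-local metadata service, walking the endpoints listed above. The same endpoints can be fetched by hand from the instance; a minimal sketch, assuming Python 3 and network access to 169.254.169.254 (the 2-second timeout is an assumption, the URLs are the ones in the log):

    # Fetch the same OpenStack/EC2-style metadata endpoints the agent walks above.
    from urllib.request import urlopen

    BASE = "http://169.254.169.254"   # link-local metadata service from the log
    for path in ("/latest/meta-data/hostname",
                 "/latest/meta-data/instance-id",
                 "/latest/meta-data/local-ipv4",
                 "/latest/meta-data/public-ipv4"):
        with urlopen(BASE + path, timeout=2) as resp:   # timeout is an assumption
            print(path, "->", resp.read().decode().strip())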
Jan 30 16:01:52.023372 coreos-metadata[1625]: Jan 30 16:01:52.023 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 16:01:52.066387 coreos-metadata[1625]: Jan 30 16:01:52.066 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 30 16:01:52.082097 coreos-metadata[1625]: Jan 30 16:01:52.081 INFO Fetch successful
Jan 30 16:01:52.082097 coreos-metadata[1625]: Jan 30 16:01:52.081 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 30 16:01:52.095769 coreos-metadata[1625]: Jan 30 16:01:52.095 INFO Fetch successful
Jan 30 16:01:52.101682 unknown[1625]: wrote ssh authorized keys file for user: core
Jan 30 16:01:52.141400 sshd[1731]: Accepted publickey for core from 172.24.4.1 port 45286 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:01:52.144310 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:01:52.152718 update-ssh-keys[1749]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 16:01:52.160262 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 16:01:52.160376 systemd-logind[1567]: New session 5 of user core.
Jan 30 16:01:52.169992 systemd[1]: Finished sshkeys.service.
Jan 30 16:01:52.187868 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 16:01:52.188254 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 16:01:52.189515 systemd[1]: Startup finished in 16.270s (kernel) + 12.018s (userspace) = 28.289s.
Jan 30 16:01:52.707454 sshd[1731]: pam_unix(sshd:session): session closed for user core
Jan 30 16:01:52.715370 systemd[1]: sshd@2-172.24.4.55:22-172.24.4.1:45286.service: Deactivated successfully.
Jan 30 16:01:52.720243 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit.
Jan 30 16:01:52.721430 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 16:01:52.723747 systemd-logind[1567]: Removed session 5.
Jan 30 16:01:58.224545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 16:01:58.236526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 16:01:58.554433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 16:01:58.573778 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 16:01:58.658095 kubelet[1774]: E0130 16:01:58.657957 1774 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 16:01:58.660837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 16:01:58.661440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 16:02:02.721643 systemd[1]: Started sshd@3-172.24.4.55:22-172.24.4.1:39746.service - OpenSSH per-connection server daemon (172.24.4.1:39746).
Jan 30 16:02:04.027847 sshd[1783]: Accepted publickey for core from 172.24.4.1 port 39746 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:02:04.030688 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:02:04.039775 systemd-logind[1567]: New session 6 of user core.
Jan 30 16:02:04.047545 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 16:02:04.608350 sshd[1783]: pam_unix(sshd:session): session closed for user core
Jan 30 16:02:04.621809 systemd[1]: Started sshd@4-172.24.4.55:22-172.24.4.1:51778.service - OpenSSH per-connection server daemon (172.24.4.1:51778).
Jan 30 16:02:04.622923 systemd[1]: sshd@3-172.24.4.55:22-172.24.4.1:39746.service: Deactivated successfully.
Jan 30 16:02:04.636480 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 16:02:04.638640 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit.
Jan 30 16:02:04.641329 systemd-logind[1567]: Removed session 6.
Jan 30 16:02:05.983847 sshd[1788]: Accepted publickey for core from 172.24.4.1 port 51778 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:02:05.986569 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:02:05.997113 systemd-logind[1567]: New session 7 of user core.
Jan 30 16:02:06.006556 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 16:02:06.608332 sshd[1788]: pam_unix(sshd:session): session closed for user core
Jan 30 16:02:06.619631 systemd[1]: Started sshd@5-172.24.4.55:22-172.24.4.1:51786.service - OpenSSH per-connection server daemon (172.24.4.1:51786).
Jan 30 16:02:06.620724 systemd[1]: sshd@4-172.24.4.55:22-172.24.4.1:51778.service: Deactivated successfully.
Jan 30 16:02:06.634872 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 16:02:06.639705 systemd-logind[1567]: Session 7 logged out. Waiting for processes to exit.
Jan 30 16:02:06.642326 systemd-logind[1567]: Removed session 7.
Jan 30 16:02:07.717486 sshd[1796]: Accepted publickey for core from 172.24.4.1 port 51786 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:02:07.720084 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:02:07.729641 systemd-logind[1567]: New session 8 of user core.
Jan 30 16:02:07.738506 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 16:02:08.356893 sshd[1796]: pam_unix(sshd:session): session closed for user core
Jan 30 16:02:08.369739 systemd[1]: Started sshd@6-172.24.4.55:22-172.24.4.1:51802.service - OpenSSH per-connection server daemon (172.24.4.1:51802).
Jan 30 16:02:08.370787 systemd[1]: sshd@5-172.24.4.55:22-172.24.4.1:51786.service: Deactivated successfully.
Jan 30 16:02:08.386922 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 16:02:08.390899 systemd-logind[1567]: Session 8 logged out. Waiting for processes to exit.
Jan 30 16:02:08.394635 systemd-logind[1567]: Removed session 8.
Jan 30 16:02:08.717787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 16:02:08.726393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 16:02:09.048343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 16:02:09.068661 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 16:02:09.143849 kubelet[1821]: E0130 16:02:09.143811 1821 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 16:02:09.147331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 16:02:09.147846 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 16:02:09.753986 sshd[1804]: Accepted publickey for core from 172.24.4.1 port 51802 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:02:09.756545 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:02:09.766180 systemd-logind[1567]: New session 9 of user core.
Jan 30 16:02:09.779499 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 16:02:10.247624 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 16:02:10.248317 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 16:02:10.264897 sudo[1832]: pam_unix(sudo:session): session closed for user root
Jan 30 16:02:10.491409 sshd[1804]: pam_unix(sshd:session): session closed for user core
Jan 30 16:02:10.501095 systemd[1]: Started sshd@7-172.24.4.55:22-172.24.4.1:51804.service - OpenSSH per-connection server daemon (172.24.4.1:51804).
Jan 30 16:02:10.506355 systemd[1]: sshd@6-172.24.4.55:22-172.24.4.1:51802.service: Deactivated successfully.
Jan 30 16:02:10.513851 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 16:02:10.515859 systemd-logind[1567]: Session 9 logged out. Waiting for processes to exit.
Jan 30 16:02:10.518815 systemd-logind[1567]: Removed session 9.
Jan 30 16:02:11.790365 sshd[1834]: Accepted publickey for core from 172.24.4.1 port 51804 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:02:11.792981 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:02:11.803447 systemd-logind[1567]: New session 10 of user core.
Jan 30 16:02:11.813536 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 16:02:12.255346 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 16:02:12.255990 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 16:02:12.263869 sudo[1842]: pam_unix(sudo:session): session closed for user root
Jan 30 16:02:12.274704 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 30 16:02:12.275984 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 16:02:12.303270 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 30 16:02:12.308848 auditctl[1845]: No rules
Jan 30 16:02:12.309958 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 16:02:12.310515 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 30 16:02:12.323850 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 16:02:12.379461 augenrules[1864]: No rules
Jan 30 16:02:12.380624 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 16:02:12.384363 sudo[1841]: pam_unix(sudo:session): session closed for user root
Jan 30 16:02:12.562344 sshd[1834]: pam_unix(sshd:session): session closed for user core
Jan 30 16:02:12.575212 systemd[1]: Started sshd@8-172.24.4.55:22-172.24.4.1:51820.service - OpenSSH per-connection server daemon (172.24.4.1:51820).
Jan 30 16:02:12.576802 systemd[1]: sshd@7-172.24.4.55:22-172.24.4.1:51804.service: Deactivated successfully.
Jan 30 16:02:12.583394 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 16:02:12.585113 systemd-logind[1567]: Session 10 logged out. Waiting for processes to exit.
Jan 30 16:02:12.590400 systemd-logind[1567]: Removed session 10.
Jan 30 16:02:13.765158 sshd[1871]: Accepted publickey for core from 172.24.4.1 port 51820 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:02:13.767721 sshd[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:02:13.780143 systemd-logind[1567]: New session 11 of user core.
Jan 30 16:02:13.789584 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 16:02:14.282399 sudo[1877]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 16:02:14.283838 sudo[1877]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 16:02:14.958222 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 16:02:14.968336 (dockerd)[1893]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 16:02:15.620246 dockerd[1893]: time="2025-01-30T16:02:15.619812143Z" level=info msg="Starting up"
Jan 30 16:02:16.041123 dockerd[1893]: time="2025-01-30T16:02:16.040995603Z" level=info msg="Loading containers: start."
Jan 30 16:02:16.222147 kernel: Initializing XFRM netlink socket
Jan 30 16:02:16.308440 systemd-networkd[1207]: docker0: Link UP
Jan 30 16:02:16.332335 dockerd[1893]: time="2025-01-30T16:02:16.332100650Z" level=info msg="Loading containers: done."
Jan 30 16:02:16.351467 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1848088399-merged.mount: Deactivated successfully.
Jan 30 16:02:16.353836 dockerd[1893]: time="2025-01-30T16:02:16.353233435Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 16:02:16.353836 dockerd[1893]: time="2025-01-30T16:02:16.353407401Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 30 16:02:16.353836 dockerd[1893]: time="2025-01-30T16:02:16.353602998Z" level=info msg="Daemon has completed initialization"
Jan 30 16:02:16.425741 dockerd[1893]: time="2025-01-30T16:02:16.425645804Z" level=info msg="API listen on /run/docker.sock"
Jan 30 16:02:16.426272 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 16:02:18.244833 containerd[1586]: time="2025-01-30T16:02:18.244762482Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 30 16:02:18.983772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178104980.mount: Deactivated successfully.
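Note: dockerd reports "API listen on /run/docker.sock" above. A minimal stdlib sketch that pings the daemon over that unix socket, assuming Python 3 and permission to open the socket; GET /_ping is part of Docker's public Engine API and returns OK when the daemon is healthy:

    # Ping the Docker Engine API over the unix socket named in the log.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")                    # socket path from the log
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    print(s.recv(4096).decode(errors="replace"))     # expect an HTTP 200 with body "OK"
    s.close()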
Jan 30 16:02:19.216891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 30 16:02:19.226271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 16:02:19.593419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 16:02:19.606658 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 16:02:19.695519 kubelet[2062]: E0130 16:02:19.695480 2062 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 16:02:19.698624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 16:02:19.698785 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 16:02:21.224031 containerd[1586]: time="2025-01-30T16:02:21.221769871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:21.224031 containerd[1586]: time="2025-01-30T16:02:21.225604423Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020"
Jan 30 16:02:21.229304 containerd[1586]: time="2025-01-30T16:02:21.228293446Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:21.233427 containerd[1586]: time="2025-01-30T16:02:21.233366643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:21.234720 containerd[1586]: time="2025-01-30T16:02:21.234511711Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.989706668s"
Jan 30 16:02:21.234720 containerd[1586]: time="2025-01-30T16:02:21.234570012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 30 16:02:21.265441 containerd[1586]: time="2025-01-30T16:02:21.264643363Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 30 16:02:23.536756 containerd[1586]: time="2025-01-30T16:02:23.536642109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:23.538149 containerd[1586]: time="2025-01-30T16:02:23.538096069Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753"
Jan 30 16:02:23.540394 containerd[1586]: time="2025-01-30T16:02:23.540304036Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:23.543728 containerd[1586]: time="2025-01-30T16:02:23.543657728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:23.545129 containerd[1586]: time="2025-01-30T16:02:23.544952337Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.280268458s"
Jan 30 16:02:23.545129 containerd[1586]: time="2025-01-30T16:02:23.544986282Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 30 16:02:23.568198 containerd[1586]: time="2025-01-30T16:02:23.568163773Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 30 16:02:25.168365 containerd[1586]: time="2025-01-30T16:02:25.168302118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:25.169885 containerd[1586]: time="2025-01-30T16:02:25.169727221Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072"
Jan 30 16:02:25.171087 containerd[1586]: time="2025-01-30T16:02:25.171029241Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:25.174269 containerd[1586]: time="2025-01-30T16:02:25.174202607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:25.175593 containerd[1586]: time="2025-01-30T16:02:25.175482856Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.607067578s"
Jan 30 16:02:25.175593 containerd[1586]: time="2025-01-30T16:02:25.175515068Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 30 16:02:25.197098 containerd[1586]: time="2025-01-30T16:02:25.197045420Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 30 16:02:26.570880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110862501.mount: Deactivated successfully.
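Note: each "Pulled image ... size X in Y" entry above pairs a byte count with a wall-clock duration, so a rough effective pull rate falls out by division. A worked example using the figures logged above for the first two pulls (sizes and durations copied from the log; "rough" because the size is the one containerd reports, not on-disk usage):

    # Rough effective pull rates from the "Pulled image" entries above.
    pulls = {
        "kube-apiserver:v1.30.9":          (32673812, 2.989706668),
        "kube-controller-manager:v1.30.9": (31052327, 2.280268458),
    }
    for image, (size_bytes, seconds) in pulls.items():
        # prints roughly 10.4 and 13.0 MiB/s respectively
        print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")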
Jan 30 16:02:27.297486 containerd[1586]: time="2025-01-30T16:02:27.297338364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:27.299602 containerd[1586]: time="2025-01-30T16:02:27.299275571Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345"
Jan 30 16:02:27.301075 containerd[1586]: time="2025-01-30T16:02:27.300931789Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:27.305891 containerd[1586]: time="2025-01-30T16:02:27.305774762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:27.307964 containerd[1586]: time="2025-01-30T16:02:27.307682333Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.110569885s"
Jan 30 16:02:27.307964 containerd[1586]: time="2025-01-30T16:02:27.307765781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 30 16:02:27.359433 containerd[1586]: time="2025-01-30T16:02:27.359261979Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 16:02:28.041769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount942856.mount: Deactivated successfully.
Jan 30 16:02:29.133649 containerd[1586]: time="2025-01-30T16:02:29.133588188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:29.134959 containerd[1586]: time="2025-01-30T16:02:29.134871519Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 30 16:02:29.136370 containerd[1586]: time="2025-01-30T16:02:29.136320162Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:29.139634 containerd[1586]: time="2025-01-30T16:02:29.139587807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:29.141067 containerd[1586]: time="2025-01-30T16:02:29.140869024Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.781540569s"
Jan 30 16:02:29.141067 containerd[1586]: time="2025-01-30T16:02:29.140909620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 30 16:02:29.162490 containerd[1586]: time="2025-01-30T16:02:29.162458599Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 30 16:02:29.729122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 30 16:02:29.738512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 16:02:29.769827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097823075.mount: Deactivated successfully.
Jan 30 16:02:29.778894 containerd[1586]: time="2025-01-30T16:02:29.778705445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:29.783319 containerd[1586]: time="2025-01-30T16:02:29.783120854Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 30 16:02:29.787070 containerd[1586]: time="2025-01-30T16:02:29.786011417Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:29.852892 containerd[1586]: time="2025-01-30T16:02:29.851986690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:29.858454 containerd[1586]: time="2025-01-30T16:02:29.858394117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 695.749597ms"
Jan 30 16:02:29.858665 containerd[1586]: time="2025-01-30T16:02:29.858629091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 30 16:02:29.914210 update_engine[1572]: I20250130 16:02:29.914136 1572 update_attempter.cc:509] Updating boot flags...
Jan 30 16:02:29.924131 containerd[1586]: time="2025-01-30T16:02:29.923669201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 30 16:02:30.257314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 16:02:30.267057 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2217)
Jan 30 16:02:30.270340 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 16:02:30.336048 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2212)
Jan 30 16:02:30.361362 kubelet[2225]: E0130 16:02:30.361334 2225 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 16:02:30.365578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 16:02:30.365774 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 16:02:30.398043 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2212)
Jan 30 16:02:30.929930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263604967.mount: Deactivated successfully.
Jan 30 16:02:34.586129 containerd[1586]: time="2025-01-30T16:02:34.586073863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:34.587887 containerd[1586]: time="2025-01-30T16:02:34.587853745Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Jan 30 16:02:34.588582 containerd[1586]: time="2025-01-30T16:02:34.588561348Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:34.592224 containerd[1586]: time="2025-01-30T16:02:34.592201774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:02:34.593717 containerd[1586]: time="2025-01-30T16:02:34.593667504Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.669957285s"
Jan 30 16:02:34.593769 containerd[1586]: time="2025-01-30T16:02:34.593718610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 30 16:02:39.916427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 16:02:39.924601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 16:02:39.979965 systemd[1]: Reloading requested from client PID 2343 ('systemctl') (unit session-11.scope)...
Jan 30 16:02:39.979983 systemd[1]: Reloading...
Jan 30 16:02:40.062105 zram_generator::config[2378]: No configuration found.
Jan 30 16:02:40.252649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 16:02:40.339323 systemd[1]: Reloading finished in 358 ms.
Jan 30 16:02:40.403177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 16:02:40.406213 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 16:02:40.409908 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 16:02:40.410237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 16:02:40.417200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 16:02:40.534253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 16:02:40.540837 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 16:02:40.614979 kubelet[2464]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:02:40.614979 kubelet[2464]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 16:02:40.614979 kubelet[2464]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:02:40.825369 kubelet[2464]: I0130 16:02:40.824773 2464 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 16:02:41.141211 kubelet[2464]: I0130 16:02:41.140996 2464 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 16:02:41.141211 kubelet[2464]: I0130 16:02:41.141041 2464 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 16:02:41.141521 kubelet[2464]: I0130 16:02:41.141336 2464 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 16:02:41.158188 kubelet[2464]: I0130 16:02:41.158120 2464 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 16:02:41.166067 kubelet[2464]: E0130 16:02:41.165720 2464 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.55:6443: connect: connection refused
Jan 30 16:02:41.181087 kubelet[2464]: I0130 16:02:41.181003 2464 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 16:02:41.182128 kubelet[2464]: I0130 16:02:41.182072 2464 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 16:02:41.183090 kubelet[2464]: I0130 16:02:41.182343 2464 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-e08351c9d9.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 16:02:41.183090 kubelet[2464]: I0130 16:02:41.182805 2464 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 16:02:41.183090 kubelet[2464]: I0130 16:02:41.182832 2464 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 16:02:41.183331 kubelet[2464]: I0130 16:02:41.183126 2464 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 16:02:41.185223 kubelet[2464]: I0130 16:02:41.185193 2464 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 16:02:41.185290 kubelet[2464]: I0130 16:02:41.185241 2464 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 16:02:41.185290 kubelet[2464]: I0130 16:02:41.185289 2464 kubelet.go:312] "Adding apiserver pod source"
Jan 30 16:02:41.185343 kubelet[2464]: I0130 16:02:41.185324 2464 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 16:02:41.195044 kubelet[2464]: W0130 16:02:41.194812 2464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Jan 30 16:02:41.195044 kubelet[2464]: E0130 16:02:41.194968 2464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused
Jan 30 16:02:41.196034 kubelet[2464]: I0130 16:02:41.195165 2464 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 16:02:41.199147 kubelet[2464]: I0130 16:02:41.198676 2464 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 16:02:41.199147 kubelet[2464]: W0130 16:02:41.198782 2464 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 16:02:41.199394 kubelet[2464]: W0130 16:02:41.199352 2464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-e08351c9d9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:41.199482 kubelet[2464]: E0130 16:02:41.199469 2464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-e08351c9d9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:41.200208 kubelet[2464]: I0130 16:02:41.200169 2464 server.go:1264] "Started kubelet" Jan 30 16:02:41.200971 kubelet[2464]: I0130 16:02:41.200294 2464 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 16:02:41.202341 kubelet[2464]: I0130 16:02:41.202321 2464 server.go:455] "Adding debug handlers to kubelet server" Jan 30 16:02:41.205734 kubelet[2464]: I0130 16:02:41.205719 2464 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 16:02:41.209321 kubelet[2464]: I0130 16:02:41.209276 2464 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 16:02:41.209573 kubelet[2464]: I0130 16:02:41.209560 2464 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 16:02:41.213438 kubelet[2464]: E0130 16:02:41.213181 2464 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.55:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-2-e08351c9d9.novalocal.181f83dbca1a8ef2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-e08351c9d9.novalocal,UID:ci-4081-3-0-2-e08351c9d9.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-e08351c9d9.novalocal,},FirstTimestamp:2025-01-30 16:02:41.200115442 +0000 UTC m=+0.652185207,LastTimestamp:2025-01-30 16:02:41.200115442 +0000 UTC m=+0.652185207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-e08351c9d9.novalocal,}" Jan 30 16:02:41.214038 kubelet[2464]: I0130 16:02:41.213989 2464 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 16:02:41.215448 kubelet[2464]: I0130 16:02:41.214232 2464 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 16:02:41.215448 kubelet[2464]: I0130 16:02:41.214387 2464 reconciler.go:26] "Reconciler: start to sync state" Jan 30 16:02:41.215448 kubelet[2464]: W0130 16:02:41.215066 2464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:41.215448 kubelet[2464]: E0130 16:02:41.215161 2464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: 
connection refused Jan 30 16:02:41.216516 kubelet[2464]: E0130 16:02:41.216446 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-e08351c9d9.novalocal?timeout=10s\": dial tcp 172.24.4.55:6443: connect: connection refused" interval="200ms" Jan 30 16:02:41.216675 kubelet[2464]: E0130 16:02:41.216636 2464 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 16:02:41.217366 kubelet[2464]: I0130 16:02:41.217331 2464 factory.go:221] Registration of the systemd container factory successfully Jan 30 16:02:41.217553 kubelet[2464]: I0130 16:02:41.217513 2464 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 16:02:41.219258 kubelet[2464]: I0130 16:02:41.219225 2464 factory.go:221] Registration of the containerd container factory successfully Jan 30 16:02:41.229220 kubelet[2464]: I0130 16:02:41.229166 2464 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 16:02:41.230608 kubelet[2464]: I0130 16:02:41.230337 2464 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 16:02:41.230608 kubelet[2464]: I0130 16:02:41.230360 2464 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 16:02:41.230608 kubelet[2464]: I0130 16:02:41.230380 2464 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 16:02:41.230608 kubelet[2464]: E0130 16:02:41.230416 2464 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 16:02:41.250889 kubelet[2464]: W0130 16:02:41.250278 2464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:41.250889 kubelet[2464]: E0130 16:02:41.250339 2464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:41.262273 kubelet[2464]: I0130 16:02:41.262252 2464 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 16:02:41.262391 kubelet[2464]: I0130 16:02:41.262381 2464 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 16:02:41.262449 kubelet[2464]: I0130 16:02:41.262441 2464 state_mem.go:36] "Initialized new in-memory state store" Jan 30 16:02:41.268485 kubelet[2464]: I0130 16:02:41.268472 2464 policy_none.go:49] "None policy: Start" Jan 30 16:02:41.269298 kubelet[2464]: I0130 16:02:41.269246 2464 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 16:02:41.269348 kubelet[2464]: I0130 16:02:41.269303 2464 state_mem.go:35] "Initializing new in-memory state store" Jan 30 16:02:41.275346 kubelet[2464]: I0130 16:02:41.275313 2464 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 16:02:41.276634 kubelet[2464]: I0130 16:02:41.275517 2464 container_log_manager.go:186] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 16:02:41.276634 kubelet[2464]: I0130 16:02:41.275645 2464 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 16:02:41.278092 kubelet[2464]: E0130 16:02:41.278002 2464 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-2-e08351c9d9.novalocal\" not found" Jan 30 16:02:41.317216 kubelet[2464]: I0130 16:02:41.317151 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.317911 kubelet[2464]: E0130 16:02:41.317848 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.55:6443/api/v1/nodes\": dial tcp 172.24.4.55:6443: connect: connection refused" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.331528 kubelet[2464]: I0130 16:02:41.331333 2464 topology_manager.go:215] "Topology Admit Handler" podUID="205fd169881b072f9967d08ea1077909" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.334734 kubelet[2464]: I0130 16:02:41.334420 2464 topology_manager.go:215] "Topology Admit Handler" podUID="210f140ba815c5745a68410f6827aa92" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.339136 kubelet[2464]: I0130 16:02:41.338137 2464 topology_manager.go:215] "Topology Admit Handler" podUID="8180d2cb998f3d19f16f8ef1122f6a27" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.356510 kubelet[2464]: E0130 16:02:41.356296 2464 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.55:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-2-e08351c9d9.novalocal.181f83dbca1a8ef2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-e08351c9d9.novalocal,UID:ci-4081-3-0-2-e08351c9d9.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-e08351c9d9.novalocal,},FirstTimestamp:2025-01-30 16:02:41.200115442 +0000 UTC m=+0.652185207,LastTimestamp:2025-01-30 16:02:41.200115442 +0000 UTC m=+0.652185207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-e08351c9d9.novalocal,}" Jan 30 16:02:41.419135 kubelet[2464]: E0130 16:02:41.417926 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-e08351c9d9.novalocal?timeout=10s\": dial tcp 172.24.4.55:6443: connect: connection refused" interval="400ms" Jan 30 16:02:41.516156 kubelet[2464]: I0130 16:02:41.515511 2464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.516156 kubelet[2464]: I0130 16:02:41.515594 2464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205fd169881b072f9967d08ea1077909-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"205fd169881b072f9967d08ea1077909\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.516156 kubelet[2464]: I0130 16:02:41.515644 2464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205fd169881b072f9967d08ea1077909-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"205fd169881b072f9967d08ea1077909\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.516156 kubelet[2464]: I0130 16:02:41.515695 2464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.516550 kubelet[2464]: I0130 16:02:41.515744 2464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.516550 kubelet[2464]: I0130 16:02:41.515791 2464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8180d2cb998f3d19f16f8ef1122f6a27-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"8180d2cb998f3d19f16f8ef1122f6a27\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.516550 kubelet[2464]: I0130 16:02:41.515836 2464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205fd169881b072f9967d08ea1077909-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"205fd169881b072f9967d08ea1077909\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.516550 kubelet[2464]: I0130 16:02:41.515878 2464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.516793 kubelet[2464]: I0130 16:02:41.515936 2464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.521566 kubelet[2464]: I0130 16:02:41.521512 2464 kubelet_node_status.go:73] "Attempting to 
register node" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.522427 kubelet[2464]: E0130 16:02:41.522323 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.55:6443/api/v1/nodes\": dial tcp 172.24.4.55:6443: connect: connection refused" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.650416 containerd[1586]: time="2025-01-30T16:02:41.650215022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal,Uid:210f140ba815c5745a68410f6827aa92,Namespace:kube-system,Attempt:0,}" Jan 30 16:02:41.654858 containerd[1586]: time="2025-01-30T16:02:41.654791427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal,Uid:205fd169881b072f9967d08ea1077909,Namespace:kube-system,Attempt:0,}" Jan 30 16:02:41.655913 containerd[1586]: time="2025-01-30T16:02:41.655510400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-e08351c9d9.novalocal,Uid:8180d2cb998f3d19f16f8ef1122f6a27,Namespace:kube-system,Attempt:0,}" Jan 30 16:02:41.819314 kubelet[2464]: E0130 16:02:41.819207 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-e08351c9d9.novalocal?timeout=10s\": dial tcp 172.24.4.55:6443: connect: connection refused" interval="800ms" Jan 30 16:02:41.925937 kubelet[2464]: I0130 16:02:41.925852 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:41.926673 kubelet[2464]: E0130 16:02:41.926560 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.55:6443/api/v1/nodes\": dial tcp 172.24.4.55:6443: connect: connection refused" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:42.295956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560634984.mount: Deactivated successfully. 
Jan 30 16:02:42.309205 containerd[1586]: time="2025-01-30T16:02:42.308868462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 16:02:42.311298 containerd[1586]: time="2025-01-30T16:02:42.311200517Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 16:02:42.313584 containerd[1586]: time="2025-01-30T16:02:42.313494781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 16:02:42.314124 containerd[1586]: time="2025-01-30T16:02:42.313956559Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 16:02:42.315096 containerd[1586]: time="2025-01-30T16:02:42.314852954Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 16:02:42.316634 kubelet[2464]: W0130 16:02:42.316535 2464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:42.316946 kubelet[2464]: E0130 16:02:42.316897 2464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:42.319105 containerd[1586]: time="2025-01-30T16:02:42.318303862Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 16:02:42.322599 containerd[1586]: time="2025-01-30T16:02:42.322505082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 16:02:42.327141 containerd[1586]: time="2025-01-30T16:02:42.327083531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 16:02:42.333713 containerd[1586]: time="2025-01-30T16:02:42.333657040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.98683ms" Jan 30 16:02:42.339912 containerd[1586]: time="2025-01-30T16:02:42.339833072Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 689.202069ms" Jan 30 16:02:42.340512 containerd[1586]: time="2025-01-30T16:02:42.340407401Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 685.433972ms" Jan 30 16:02:42.344291 kubelet[2464]: W0130 16:02:42.344162 2464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-e08351c9d9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:42.344421 kubelet[2464]: E0130 16:02:42.344335 2464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-e08351c9d9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:42.458258 kubelet[2464]: W0130 16:02:42.458151 2464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:42.458258 kubelet[2464]: E0130 16:02:42.458208 2464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:42.510425 kubelet[2464]: W0130 16:02:42.510231 2464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:42.510425 kubelet[2464]: E0130 16:02:42.510380 2464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.55:6443: connect: connection refused Jan 30 16:02:42.580916 containerd[1586]: time="2025-01-30T16:02:42.580048055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:02:42.580916 containerd[1586]: time="2025-01-30T16:02:42.580139797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:02:42.580916 containerd[1586]: time="2025-01-30T16:02:42.580155988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:02:42.583283 containerd[1586]: time="2025-01-30T16:02:42.581997060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:02:42.583283 containerd[1586]: time="2025-01-30T16:02:42.582943980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:02:42.583283 containerd[1586]: time="2025-01-30T16:02:42.582994084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:02:42.584274 containerd[1586]: time="2025-01-30T16:02:42.584158433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:02:42.584431 containerd[1586]: time="2025-01-30T16:02:42.583891041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:02:42.585889 containerd[1586]: time="2025-01-30T16:02:42.585748783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:02:42.586436 containerd[1586]: time="2025-01-30T16:02:42.586117977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:02:42.586436 containerd[1586]: time="2025-01-30T16:02:42.586199049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:02:42.586850 containerd[1586]: time="2025-01-30T16:02:42.586664004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:02:42.621998 kubelet[2464]: E0130 16:02:42.621862 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-e08351c9d9.novalocal?timeout=10s\": dial tcp 172.24.4.55:6443: connect: connection refused" interval="1.6s" Jan 30 16:02:42.685241 containerd[1586]: time="2025-01-30T16:02:42.685180976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal,Uid:205fd169881b072f9967d08ea1077909,Namespace:kube-system,Attempt:0,} returns sandbox id \"187b254e7609a87897dafab2e281dabf155c7195db2b83aef1b00b5f9f72d649\"" Jan 30 16:02:42.694448 containerd[1586]: time="2025-01-30T16:02:42.693975501Z" level=info msg="CreateContainer within sandbox \"187b254e7609a87897dafab2e281dabf155c7195db2b83aef1b00b5f9f72d649\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 16:02:42.695413 containerd[1586]: time="2025-01-30T16:02:42.695277158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-e08351c9d9.novalocal,Uid:8180d2cb998f3d19f16f8ef1122f6a27,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e2cc9d450362a78b1624cac25c991c9dd62752f07a7920767378993de77f206\"" Jan 30 16:02:42.698071 containerd[1586]: time="2025-01-30T16:02:42.697988236Z" level=info msg="CreateContainer within sandbox \"8e2cc9d450362a78b1624cac25c991c9dd62752f07a7920767378993de77f206\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 16:02:42.703780 containerd[1586]: time="2025-01-30T16:02:42.703708832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal,Uid:210f140ba815c5745a68410f6827aa92,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c834f8fd42a6e29b911d9286553df189300e0dcaff81ec12291959b2d737fdd\"" Jan 30 16:02:42.707523 containerd[1586]: time="2025-01-30T16:02:42.707491114Z" level=info msg="CreateContainer within sandbox \"9c834f8fd42a6e29b911d9286553df189300e0dcaff81ec12291959b2d737fdd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 16:02:42.723101 containerd[1586]: 
time="2025-01-30T16:02:42.722992189Z" level=info msg="CreateContainer within sandbox \"187b254e7609a87897dafab2e281dabf155c7195db2b83aef1b00b5f9f72d649\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b9a082be90e3f513d5e6457a183a8fb81d1a1a756f47e36ab1ae33023557aaf\"" Jan 30 16:02:42.724250 containerd[1586]: time="2025-01-30T16:02:42.723637572Z" level=info msg="StartContainer for \"4b9a082be90e3f513d5e6457a183a8fb81d1a1a756f47e36ab1ae33023557aaf\"" Jan 30 16:02:42.729567 kubelet[2464]: I0130 16:02:42.729328 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:42.730345 kubelet[2464]: E0130 16:02:42.730121 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.55:6443/api/v1/nodes\": dial tcp 172.24.4.55:6443: connect: connection refused" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:42.737112 containerd[1586]: time="2025-01-30T16:02:42.736997782Z" level=info msg="CreateContainer within sandbox \"9c834f8fd42a6e29b911d9286553df189300e0dcaff81ec12291959b2d737fdd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ce0fa697e52ac0fb3d4acbee27ad7a34b377068829fda6e415388181dccf5651\"" Jan 30 16:02:42.738079 containerd[1586]: time="2025-01-30T16:02:42.738009545Z" level=info msg="StartContainer for \"ce0fa697e52ac0fb3d4acbee27ad7a34b377068829fda6e415388181dccf5651\"" Jan 30 16:02:42.744451 containerd[1586]: time="2025-01-30T16:02:42.744400962Z" level=info msg="CreateContainer within sandbox \"8e2cc9d450362a78b1624cac25c991c9dd62752f07a7920767378993de77f206\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"87fd3921f1bf5212556e557a825049cc20072de6c554a8e41637ccde96547a07\"" Jan 30 16:02:42.745349 containerd[1586]: time="2025-01-30T16:02:42.745161100Z" level=info msg="StartContainer for \"87fd3921f1bf5212556e557a825049cc20072de6c554a8e41637ccde96547a07\"" Jan 30 16:02:42.877079 containerd[1586]: time="2025-01-30T16:02:42.876526865Z" level=info msg="StartContainer for \"4b9a082be90e3f513d5e6457a183a8fb81d1a1a756f47e36ab1ae33023557aaf\" returns successfully" Jan 30 16:02:42.877079 containerd[1586]: time="2025-01-30T16:02:42.876686745Z" level=info msg="StartContainer for \"ce0fa697e52ac0fb3d4acbee27ad7a34b377068829fda6e415388181dccf5651\" returns successfully" Jan 30 16:02:42.893566 containerd[1586]: time="2025-01-30T16:02:42.893090698Z" level=info msg="StartContainer for \"87fd3921f1bf5212556e557a825049cc20072de6c554a8e41637ccde96547a07\" returns successfully" Jan 30 16:02:44.335952 kubelet[2464]: I0130 16:02:44.335476 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:44.923192 kubelet[2464]: E0130 16:02:44.923139 2464 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-2-e08351c9d9.novalocal\" not found" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:45.043158 kubelet[2464]: I0130 16:02:45.043064 2464 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:45.196398 kubelet[2464]: I0130 16:02:45.196153 2464 apiserver.go:52] "Watching apiserver" Jan 30 16:02:45.215053 kubelet[2464]: I0130 16:02:45.215008 2464 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 16:02:45.410186 kubelet[2464]: E0130 16:02:45.410115 2464 kubelet.go:1928] "Failed creating a 
mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:47.336174 systemd[1]: Reloading requested from client PID 2736 ('systemctl') (unit session-11.scope)... Jan 30 16:02:47.336206 systemd[1]: Reloading... Jan 30 16:02:47.440103 zram_generator::config[2774]: No configuration found. Jan 30 16:02:47.590945 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 16:02:47.704615 systemd[1]: Reloading finished in 367 ms. Jan 30 16:02:47.737687 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 16:02:47.738341 kubelet[2464]: E0130 16:02:47.738104 2464 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-3-0-2-e08351c9d9.novalocal.181f83dbca1a8ef2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-e08351c9d9.novalocal,UID:ci-4081-3-0-2-e08351c9d9.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-e08351c9d9.novalocal,},FirstTimestamp:2025-01-30 16:02:41.200115442 +0000 UTC m=+0.652185207,LastTimestamp:2025-01-30 16:02:41.200115442 +0000 UTC m=+0.652185207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-e08351c9d9.novalocal,}" Jan 30 16:02:47.749343 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 16:02:47.749646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 16:02:47.756674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 16:02:48.095201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 16:02:48.111128 (kubelet)[2849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 16:02:48.186750 kubelet[2849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 16:02:48.186750 kubelet[2849]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 16:02:48.186750 kubelet[2849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 16:02:48.186750 kubelet[2849]: I0130 16:02:48.186160 2849 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 16:02:48.198193 kubelet[2849]: I0130 16:02:48.198135 2849 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 16:02:48.198357 kubelet[2849]: I0130 16:02:48.198347 2849 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 16:02:48.198674 kubelet[2849]: I0130 16:02:48.198659 2849 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 16:02:48.201497 kubelet[2849]: I0130 16:02:48.201388 2849 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 16:02:48.203429 kubelet[2849]: I0130 16:02:48.203412 2849 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 16:02:48.214645 kubelet[2849]: I0130 16:02:48.214520 2849 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 16:02:48.215240 kubelet[2849]: I0130 16:02:48.215198 2849 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 16:02:48.215440 kubelet[2849]: I0130 16:02:48.215242 2849 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-e08351c9d9.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 16:02:48.215553 kubelet[2849]: I0130 16:02:48.215450 2849 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 16:02:48.215553 kubelet[2849]: I0130 16:02:48.215463 2849 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 16:02:48.215553 kubelet[2849]: I0130 16:02:48.215497 2849 state_mem.go:36] "Initialized new in-memory state store" Jan 30 16:02:48.215902 kubelet[2849]: I0130 16:02:48.215869 2849 kubelet.go:400] "Attempting to sync node with API server" Jan 30 16:02:48.216273 kubelet[2849]: I0130 16:02:48.216252 2849 kubelet.go:301] 
"Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 16:02:48.216317 kubelet[2849]: I0130 16:02:48.216291 2849 kubelet.go:312] "Adding apiserver pod source" Jan 30 16:02:48.216317 kubelet[2849]: I0130 16:02:48.216308 2849 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 16:02:48.221678 kubelet[2849]: I0130 16:02:48.221255 2849 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 16:02:48.221678 kubelet[2849]: I0130 16:02:48.221416 2849 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 16:02:48.224355 kubelet[2849]: I0130 16:02:48.224304 2849 server.go:1264] "Started kubelet" Jan 30 16:02:48.234658 kubelet[2849]: I0130 16:02:48.234513 2849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 16:02:48.238608 kubelet[2849]: I0130 16:02:48.238548 2849 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 16:02:48.240063 kubelet[2849]: I0130 16:02:48.239864 2849 server.go:455] "Adding debug handlers to kubelet server" Jan 30 16:02:48.241051 kubelet[2849]: I0130 16:02:48.240964 2849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 16:02:48.241227 kubelet[2849]: I0130 16:02:48.241207 2849 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 16:02:48.244586 kubelet[2849]: I0130 16:02:48.244143 2849 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 16:02:48.245176 kubelet[2849]: I0130 16:02:48.245153 2849 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 16:02:48.245323 kubelet[2849]: I0130 16:02:48.245304 2849 reconciler.go:26] "Reconciler: start to sync state" Jan 30 16:02:48.254170 kubelet[2849]: I0130 16:02:48.254132 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 16:02:48.255580 kubelet[2849]: I0130 16:02:48.255461 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 16:02:48.255580 kubelet[2849]: I0130 16:02:48.255587 2849 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 16:02:48.255699 kubelet[2849]: I0130 16:02:48.255612 2849 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 16:02:48.255699 kubelet[2849]: E0130 16:02:48.255652 2849 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 16:02:48.257141 kubelet[2849]: I0130 16:02:48.257118 2849 factory.go:221] Registration of the containerd container factory successfully Jan 30 16:02:48.257141 kubelet[2849]: I0130 16:02:48.257133 2849 factory.go:221] Registration of the systemd container factory successfully Jan 30 16:02:48.257216 kubelet[2849]: I0130 16:02:48.257201 2849 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 16:02:48.264820 kubelet[2849]: E0130 16:02:48.263947 2849 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 16:02:48.338564 kubelet[2849]: I0130 16:02:48.338537 2849 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 16:02:48.338564 kubelet[2849]: I0130 16:02:48.338555 2849 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 16:02:48.338564 kubelet[2849]: I0130 16:02:48.338572 2849 state_mem.go:36] "Initialized new in-memory state store" Jan 30 16:02:48.338749 kubelet[2849]: I0130 16:02:48.338712 2849 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 16:02:48.338749 kubelet[2849]: I0130 16:02:48.338723 2849 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 16:02:48.338749 kubelet[2849]: I0130 16:02:48.338741 2849 policy_none.go:49] "None policy: Start" Jan 30 16:02:48.340638 kubelet[2849]: I0130 16:02:48.340584 2849 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 16:02:48.340638 kubelet[2849]: I0130 16:02:48.340603 2849 state_mem.go:35] "Initializing new in-memory state store" Jan 30 16:02:48.340802 kubelet[2849]: I0130 16:02:48.340767 2849 state_mem.go:75] "Updated machine memory state" Jan 30 16:02:48.341973 kubelet[2849]: I0130 16:02:48.341918 2849 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 16:02:48.343116 kubelet[2849]: I0130 16:02:48.342121 2849 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 16:02:48.343116 kubelet[2849]: I0130 16:02:48.342216 2849 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 16:02:48.348184 kubelet[2849]: I0130 16:02:48.348052 2849 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.356723 kubelet[2849]: I0130 16:02:48.356691 2849 topology_manager.go:215] "Topology Admit Handler" podUID="205fd169881b072f9967d08ea1077909" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.357119 kubelet[2849]: I0130 16:02:48.357101 2849 topology_manager.go:215] "Topology Admit Handler" podUID="210f140ba815c5745a68410f6827aa92" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.357240 kubelet[2849]: I0130 16:02:48.357227 2849 topology_manager.go:215] "Topology Admit Handler" podUID="8180d2cb998f3d19f16f8ef1122f6a27" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.373078 kubelet[2849]: W0130 16:02:48.373029 2849 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 16:02:48.374375 kubelet[2849]: W0130 16:02:48.373126 2849 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 16:02:48.374375 kubelet[2849]: W0130 16:02:48.373161 2849 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 16:02:48.374375 kubelet[2849]: I0130 16:02:48.373650 2849 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.374375 kubelet[2849]: I0130 16:02:48.373726 2849 kubelet_node_status.go:76] "Successfully registered node" 
node="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.448601 kubelet[2849]: I0130 16:02:48.448299 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205fd169881b072f9967d08ea1077909-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"205fd169881b072f9967d08ea1077909\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.448601 kubelet[2849]: I0130 16:02:48.448338 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205fd169881b072f9967d08ea1077909-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"205fd169881b072f9967d08ea1077909\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.448601 kubelet[2849]: I0130 16:02:48.448362 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.448601 kubelet[2849]: I0130 16:02:48.448382 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8180d2cb998f3d19f16f8ef1122f6a27-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"8180d2cb998f3d19f16f8ef1122f6a27\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.448601 kubelet[2849]: I0130 16:02:48.448404 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205fd169881b072f9967d08ea1077909-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"205fd169881b072f9967d08ea1077909\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.448837 kubelet[2849]: I0130 16:02:48.448422 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.448837 kubelet[2849]: I0130 16:02:48.448443 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.448837 kubelet[2849]: I0130 16:02:48.448462 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:48.448837 kubelet[2849]: I0130 16:02:48.448485 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/210f140ba815c5745a68410f6827aa92-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal\" (UID: \"210f140ba815c5745a68410f6827aa92\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:49.218480 kubelet[2849]: I0130 16:02:49.217976 2849 apiserver.go:52] "Watching apiserver" Jan 30 16:02:49.246447 kubelet[2849]: I0130 16:02:49.246324 2849 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 16:02:49.307448 kubelet[2849]: W0130 16:02:49.307226 2849 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 16:02:49.308130 kubelet[2849]: E0130 16:02:49.307790 2849 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:02:49.370773 kubelet[2849]: I0130 16:02:49.370707 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-2-e08351c9d9.novalocal" podStartSLOduration=1.370684819 podStartE2EDuration="1.370684819s" podCreationTimestamp="2025-01-30 16:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 16:02:49.350948255 +0000 UTC m=+1.231331143" watchObservedRunningTime="2025-01-30 16:02:49.370684819 +0000 UTC m=+1.251067618" Jan 30 16:02:49.450672 kubelet[2849]: I0130 16:02:49.450598 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-2-e08351c9d9.novalocal" podStartSLOduration=1.450559514 podStartE2EDuration="1.450559514s" podCreationTimestamp="2025-01-30 16:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 16:02:49.371001875 +0000 UTC m=+1.251384663" watchObservedRunningTime="2025-01-30 16:02:49.450559514 +0000 UTC m=+1.330942312" Jan 30 16:02:49.451055 kubelet[2849]: I0130 16:02:49.450922 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-2-e08351c9d9.novalocal" podStartSLOduration=1.450815704 podStartE2EDuration="1.450815704s" podCreationTimestamp="2025-01-30 16:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 16:02:49.446885583 +0000 UTC m=+1.327268371" watchObservedRunningTime="2025-01-30 16:02:49.450815704 +0000 UTC m=+1.331198493" Jan 30 16:02:54.149365 sudo[1877]: pam_unix(sudo:session): session closed for user root Jan 30 16:02:54.313896 sshd[1871]: pam_unix(sshd:session): session closed for user core Jan 30 16:02:54.325614 systemd[1]: sshd@8-172.24.4.55:22-172.24.4.1:51820.service: Deactivated successfully. Jan 30 16:02:54.332161 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 16:02:54.332645 systemd-logind[1567]: Session 11 logged out. Waiting for processes to exit. 
Jan 30 16:02:54.336327 systemd-logind[1567]: Removed session 11. Jan 30 16:03:01.074466 kubelet[2849]: I0130 16:03:01.074041 2849 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 16:03:01.078337 kubelet[2849]: I0130 16:03:01.076346 2849 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 16:03:01.078591 containerd[1586]: time="2025-01-30T16:03:01.075280093Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 16:03:01.982155 kubelet[2849]: I0130 16:03:01.979912 2849 topology_manager.go:215] "Topology Admit Handler" podUID="305d944b-81cd-4890-9866-5c23c22ba575" podNamespace="kube-system" podName="kube-proxy-l5cb7" Jan 30 16:03:02.107509 kubelet[2849]: I0130 16:03:02.107006 2849 topology_manager.go:215] "Topology Admit Handler" podUID="bf9ba131-1acc-41e9-a6b8-3037653e3ce0" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-gw6jp" Jan 30 16:03:02.152694 kubelet[2849]: I0130 16:03:02.152369 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/305d944b-81cd-4890-9866-5c23c22ba575-xtables-lock\") pod \"kube-proxy-l5cb7\" (UID: \"305d944b-81cd-4890-9866-5c23c22ba575\") " pod="kube-system/kube-proxy-l5cb7" Jan 30 16:03:02.152906 kubelet[2849]: I0130 16:03:02.152791 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/305d944b-81cd-4890-9866-5c23c22ba575-lib-modules\") pod \"kube-proxy-l5cb7\" (UID: \"305d944b-81cd-4890-9866-5c23c22ba575\") " pod="kube-system/kube-proxy-l5cb7" Jan 30 16:03:02.153115 kubelet[2849]: I0130 16:03:02.152996 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrtc\" (UniqueName: \"kubernetes.io/projected/305d944b-81cd-4890-9866-5c23c22ba575-kube-api-access-dxrtc\") pod \"kube-proxy-l5cb7\" (UID: \"305d944b-81cd-4890-9866-5c23c22ba575\") " pod="kube-system/kube-proxy-l5cb7" Jan 30 16:03:02.153115 kubelet[2849]: I0130 16:03:02.153077 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/305d944b-81cd-4890-9866-5c23c22ba575-kube-proxy\") pod \"kube-proxy-l5cb7\" (UID: \"305d944b-81cd-4890-9866-5c23c22ba575\") " pod="kube-system/kube-proxy-l5cb7" Jan 30 16:03:02.253948 kubelet[2849]: I0130 16:03:02.253341 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpmdx\" (UniqueName: \"kubernetes.io/projected/bf9ba131-1acc-41e9-a6b8-3037653e3ce0-kube-api-access-dpmdx\") pod \"tigera-operator-7bc55997bb-gw6jp\" (UID: \"bf9ba131-1acc-41e9-a6b8-3037653e3ce0\") " pod="tigera-operator/tigera-operator-7bc55997bb-gw6jp" Jan 30 16:03:02.253948 kubelet[2849]: I0130 16:03:02.253443 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bf9ba131-1acc-41e9-a6b8-3037653e3ce0-var-lib-calico\") pod \"tigera-operator-7bc55997bb-gw6jp\" (UID: \"bf9ba131-1acc-41e9-a6b8-3037653e3ce0\") " pod="tigera-operator/tigera-operator-7bc55997bb-gw6jp" Jan 30 16:03:02.307817 containerd[1586]: time="2025-01-30T16:03:02.307052104Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-l5cb7,Uid:305d944b-81cd-4890-9866-5c23c22ba575,Namespace:kube-system,Attempt:0,}" Jan 30 16:03:02.366521 containerd[1586]: time="2025-01-30T16:03:02.366379800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:03:02.366904 containerd[1586]: time="2025-01-30T16:03:02.366844522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:03:02.367210 containerd[1586]: time="2025-01-30T16:03:02.367150897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:02.368346 containerd[1586]: time="2025-01-30T16:03:02.368277612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:02.415900 containerd[1586]: time="2025-01-30T16:03:02.415856528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gw6jp,Uid:bf9ba131-1acc-41e9-a6b8-3037653e3ce0,Namespace:tigera-operator,Attempt:0,}" Jan 30 16:03:02.427919 containerd[1586]: time="2025-01-30T16:03:02.427880584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5cb7,Uid:305d944b-81cd-4890-9866-5c23c22ba575,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c48b2d6ed0e77aeca8c7485a5c1f0503eae2050aaa364a14af19178ba76933a\"" Jan 30 16:03:02.433714 containerd[1586]: time="2025-01-30T16:03:02.433677003Z" level=info msg="CreateContainer within sandbox \"4c48b2d6ed0e77aeca8c7485a5c1f0503eae2050aaa364a14af19178ba76933a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 16:03:02.456466 containerd[1586]: time="2025-01-30T16:03:02.456309449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:03:02.456466 containerd[1586]: time="2025-01-30T16:03:02.456410138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:03:02.456726 containerd[1586]: time="2025-01-30T16:03:02.456433592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:02.457345 containerd[1586]: time="2025-01-30T16:03:02.457245335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:02.467397 containerd[1586]: time="2025-01-30T16:03:02.467352344Z" level=info msg="CreateContainer within sandbox \"4c48b2d6ed0e77aeca8c7485a5c1f0503eae2050aaa364a14af19178ba76933a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d01e861063d03919038bbb77e9f5c7700ea4b41c569e31a68884900b7a518973\"" Jan 30 16:03:02.469049 containerd[1586]: time="2025-01-30T16:03:02.468751710Z" level=info msg="StartContainer for \"d01e861063d03919038bbb77e9f5c7700ea4b41c569e31a68884900b7a518973\"" Jan 30 16:03:02.529027 containerd[1586]: time="2025-01-30T16:03:02.528855874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gw6jp,Uid:bf9ba131-1acc-41e9-a6b8-3037653e3ce0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2b799800986ba43457eeefa3d65ab3a4f9b348e8a6a52b059e0f011cd3f5e00d\"" Jan 30 16:03:02.533360 containerd[1586]: time="2025-01-30T16:03:02.533122842Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 16:03:02.562540 containerd[1586]: time="2025-01-30T16:03:02.562412092Z" level=info msg="StartContainer for \"d01e861063d03919038bbb77e9f5c7700ea4b41c569e31a68884900b7a518973\" returns successfully" Jan 30 16:03:03.375428 kubelet[2849]: I0130 16:03:03.372616 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l5cb7" podStartSLOduration=2.372583455 podStartE2EDuration="2.372583455s" podCreationTimestamp="2025-01-30 16:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 16:03:03.372201539 +0000 UTC m=+15.252584387" watchObservedRunningTime="2025-01-30 16:03:03.372583455 +0000 UTC m=+15.252966294" Jan 30 16:03:04.640396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519849970.mount: Deactivated successfully. 
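The pod_startup_latency_tracker entry above reports the same value for podStartSLOduration and podStartE2EDuration on kube-proxy-l5cb7: both pull timestamps are the zero time (the kube-proxy image was already on the node, so nothing was pulled), and the figure reduces to watchObservedRunningTime minus podCreationTimestamp. A minimal Go sketch of that arithmetic, with the timestamps copied verbatim from the log:

```go
// Reproduces the podStartSLOduration arithmetic from the
// pod_startup_latency_tracker entry above. The parse layout matches Go's
// default time.String() format, which is what the log appears to use.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2025-01-30 16:03:01 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-01-30 16:03:03.372583455 +0000 UTC")

	// No image pull to subtract, so SLO duration == end-to-end duration.
	fmt.Println(observed.Sub(created)) // 2.372583455s
}
```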
Jan 30 16:03:06.106800 containerd[1586]: time="2025-01-30T16:03:06.106303566Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:06.107752 containerd[1586]: time="2025-01-30T16:03:06.107681552Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 16:03:06.110055 containerd[1586]: time="2025-01-30T16:03:06.109132284Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:06.112359 containerd[1586]: time="2025-01-30T16:03:06.112272387Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:06.115117 containerd[1586]: time="2025-01-30T16:03:06.114489226Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.58130541s" Jan 30 16:03:06.115117 containerd[1586]: time="2025-01-30T16:03:06.114563135Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 16:03:06.120628 containerd[1586]: time="2025-01-30T16:03:06.120380030Z" level=info msg="CreateContainer within sandbox \"2b799800986ba43457eeefa3d65ab3a4f9b348e8a6a52b059e0f011cd3f5e00d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 16:03:06.161688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471794013.mount: Deactivated successfully. Jan 30 16:03:06.164318 containerd[1586]: time="2025-01-30T16:03:06.163513881Z" level=info msg="CreateContainer within sandbox \"2b799800986ba43457eeefa3d65ab3a4f9b348e8a6a52b059e0f011cd3f5e00d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"639b8c4c0a98e324d5c1c76730aee1be17a87a45fc0106bdf79903e05a2cc694\"" Jan 30 16:03:06.168214 containerd[1586]: time="2025-01-30T16:03:06.168133600Z" level=info msg="StartContainer for \"639b8c4c0a98e324d5c1c76730aee1be17a87a45fc0106bdf79903e05a2cc694\"" Jan 30 16:03:06.223586 systemd[1]: run-containerd-runc-k8s.io-639b8c4c0a98e324d5c1c76730aee1be17a87a45fc0106bdf79903e05a2cc694-runc.yt5vlI.mount: Deactivated successfully. 
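The pull above gives enough to estimate fetch throughput: containerd reports bytes read=21762497 for quay.io/tigera/operator:v1.36.2 against a wall time of 3.58130541s in the "Pulled image" entry, roughly 6 MB/s from quay.io. A small Go sketch of the division (a rough figure only; "bytes read" is what containerd actually fetched, which is close to but not identical to the reported image size of 21758492):

```go
// Rough pull-throughput estimate from the containerd entries above.
package main

import "fmt"

func main() {
	const bytesRead = 21762497 // "bytes read" from the stop-pulling entry
	const seconds = 3.58130541 // duration from the "Pulled image" entry

	fmt.Printf("%.2f MB/s\n", bytesRead/seconds/1e6) // ≈ 6.08 MB/s
}
```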
Jan 30 16:03:06.258279 containerd[1586]: time="2025-01-30T16:03:06.258245505Z" level=info msg="StartContainer for \"639b8c4c0a98e324d5c1c76730aee1be17a87a45fc0106bdf79903e05a2cc694\" returns successfully" Jan 30 16:03:06.390012 kubelet[2849]: I0130 16:03:06.389781 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-gw6jp" podStartSLOduration=0.803382213 podStartE2EDuration="4.389750514s" podCreationTimestamp="2025-01-30 16:03:02 +0000 UTC" firstStartedPulling="2025-01-30 16:03:02.531169726 +0000 UTC m=+14.411552514" lastFinishedPulling="2025-01-30 16:03:06.117537967 +0000 UTC m=+17.997920815" observedRunningTime="2025-01-30 16:03:06.389307152 +0000 UTC m=+18.269690020" watchObservedRunningTime="2025-01-30 16:03:06.389750514 +0000 UTC m=+18.270133352" Jan 30 16:03:09.547472 kubelet[2849]: I0130 16:03:09.546950 2849 topology_manager.go:215] "Topology Admit Handler" podUID="38db8b1d-7128-41d0-8067-7fb59ecf7a0f" podNamespace="calico-system" podName="calico-typha-7999777c77-spggr" Jan 30 16:03:09.606256 kubelet[2849]: I0130 16:03:09.606119 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38db8b1d-7128-41d0-8067-7fb59ecf7a0f-tigera-ca-bundle\") pod \"calico-typha-7999777c77-spggr\" (UID: \"38db8b1d-7128-41d0-8067-7fb59ecf7a0f\") " pod="calico-system/calico-typha-7999777c77-spggr" Jan 30 16:03:09.606256 kubelet[2849]: I0130 16:03:09.606165 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/38db8b1d-7128-41d0-8067-7fb59ecf7a0f-typha-certs\") pod \"calico-typha-7999777c77-spggr\" (UID: \"38db8b1d-7128-41d0-8067-7fb59ecf7a0f\") " pod="calico-system/calico-typha-7999777c77-spggr" Jan 30 16:03:09.606256 kubelet[2849]: I0130 16:03:09.606191 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5s2d\" (UniqueName: \"kubernetes.io/projected/38db8b1d-7128-41d0-8067-7fb59ecf7a0f-kube-api-access-v5s2d\") pod \"calico-typha-7999777c77-spggr\" (UID: \"38db8b1d-7128-41d0-8067-7fb59ecf7a0f\") " pod="calico-system/calico-typha-7999777c77-spggr" Jan 30 16:03:09.761674 kubelet[2849]: I0130 16:03:09.759176 2849 topology_manager.go:215] "Topology Admit Handler" podUID="b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a" podNamespace="calico-system" podName="calico-node-gfwdg" Jan 30 16:03:09.808413 kubelet[2849]: I0130 16:03:09.808284 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-var-run-calico\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808413 kubelet[2849]: I0130 16:03:09.808327 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-var-lib-calico\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808413 kubelet[2849]: I0130 16:03:09.808351 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-cni-log-dir\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808413 kubelet[2849]: I0130 16:03:09.808374 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-tigera-ca-bundle\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808413 kubelet[2849]: I0130 16:03:09.808393 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-node-certs\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808624 kubelet[2849]: I0130 16:03:09.808413 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-flexvol-driver-host\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808624 kubelet[2849]: I0130 16:03:09.808431 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-cni-bin-dir\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808624 kubelet[2849]: I0130 16:03:09.808456 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-xtables-lock\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808624 kubelet[2849]: I0130 16:03:09.808473 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-policysync\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808624 kubelet[2849]: I0130 16:03:09.808490 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmnf\" (UniqueName: \"kubernetes.io/projected/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-kube-api-access-zrmnf\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808757 kubelet[2849]: I0130 16:03:09.808509 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-lib-modules\") pod \"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.808757 kubelet[2849]: I0130 16:03:09.808526 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a-cni-net-dir\") pod 
\"calico-node-gfwdg\" (UID: \"b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a\") " pod="calico-system/calico-node-gfwdg" Jan 30 16:03:09.863216 containerd[1586]: time="2025-01-30T16:03:09.863159818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7999777c77-spggr,Uid:38db8b1d-7128-41d0-8067-7fb59ecf7a0f,Namespace:calico-system,Attempt:0,}" Jan 30 16:03:09.915201 kubelet[2849]: E0130 16:03:09.915045 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.915201 kubelet[2849]: W0130 16:03:09.915076 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.915201 kubelet[2849]: E0130 16:03:09.915117 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.916113 kubelet[2849]: E0130 16:03:09.915630 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.916113 kubelet[2849]: W0130 16:03:09.915641 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.916113 kubelet[2849]: E0130 16:03:09.915652 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.917161 kubelet[2849]: E0130 16:03:09.916621 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.917161 kubelet[2849]: W0130 16:03:09.916632 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.917161 kubelet[2849]: E0130 16:03:09.916907 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.919526 kubelet[2849]: E0130 16:03:09.919492 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.919526 kubelet[2849]: W0130 16:03:09.919510 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.924652 kubelet[2849]: E0130 16:03:09.919789 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:09.924652 kubelet[2849]: E0130 16:03:09.919991 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.924652 kubelet[2849]: W0130 16:03:09.920001 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.924652 kubelet[2849]: E0130 16:03:09.920267 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.924652 kubelet[2849]: E0130 16:03:09.920529 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.924652 kubelet[2849]: W0130 16:03:09.920538 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.924652 kubelet[2849]: E0130 16:03:09.920703 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.924652 kubelet[2849]: E0130 16:03:09.920904 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.924652 kubelet[2849]: W0130 16:03:09.920914 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.924652 kubelet[2849]: E0130 16:03:09.921042 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.924919 kubelet[2849]: E0130 16:03:09.922785 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.924919 kubelet[2849]: W0130 16:03:09.922797 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.924919 kubelet[2849]: E0130 16:03:09.922903 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.924919 kubelet[2849]: E0130 16:03:09.923359 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.924919 kubelet[2849]: W0130 16:03:09.923369 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.924919 kubelet[2849]: E0130 16:03:09.923868 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:09.924919 kubelet[2849]: E0130 16:03:09.924230 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.924919 kubelet[2849]: W0130 16:03:09.924241 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.924919 kubelet[2849]: E0130 16:03:09.924555 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.930782 kubelet[2849]: E0130 16:03:09.925349 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.930782 kubelet[2849]: W0130 16:03:09.925359 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.930782 kubelet[2849]: E0130 16:03:09.925663 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.930782 kubelet[2849]: E0130 16:03:09.926201 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.930782 kubelet[2849]: W0130 16:03:09.926211 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.930782 kubelet[2849]: E0130 16:03:09.926615 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.930782 kubelet[2849]: E0130 16:03:09.926817 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.930782 kubelet[2849]: W0130 16:03:09.926829 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.930782 kubelet[2849]: E0130 16:03:09.927172 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.930782 kubelet[2849]: E0130 16:03:09.927481 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.933799 kubelet[2849]: W0130 16:03:09.927493 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.933799 kubelet[2849]: E0130 16:03:09.927621 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:09.933799 kubelet[2849]: E0130 16:03:09.927961 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.933799 kubelet[2849]: W0130 16:03:09.927970 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.933799 kubelet[2849]: E0130 16:03:09.928144 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.933799 kubelet[2849]: E0130 16:03:09.928382 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.933799 kubelet[2849]: W0130 16:03:09.928392 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.933799 kubelet[2849]: E0130 16:03:09.928631 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.933799 kubelet[2849]: E0130 16:03:09.928810 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.933799 kubelet[2849]: W0130 16:03:09.928819 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.934067 kubelet[2849]: E0130 16:03:09.928968 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.934067 kubelet[2849]: E0130 16:03:09.929211 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.934067 kubelet[2849]: W0130 16:03:09.929219 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.934067 kubelet[2849]: E0130 16:03:09.929368 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.934067 kubelet[2849]: E0130 16:03:09.929633 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.934067 kubelet[2849]: W0130 16:03:09.929642 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.934067 kubelet[2849]: E0130 16:03:09.929793 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:09.934067 kubelet[2849]: E0130 16:03:09.930108 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.934067 kubelet[2849]: W0130 16:03:09.930117 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.934067 kubelet[2849]: E0130 16:03:09.930270 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.934292 kubelet[2849]: E0130 16:03:09.933150 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.934292 kubelet[2849]: W0130 16:03:09.933181 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.934292 kubelet[2849]: E0130 16:03:09.933265 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.934292 kubelet[2849]: E0130 16:03:09.933475 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.934292 kubelet[2849]: W0130 16:03:09.933483 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.934292 kubelet[2849]: E0130 16:03:09.933611 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.940498 kubelet[2849]: E0130 16:03:09.936090 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.940498 kubelet[2849]: W0130 16:03:09.936103 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.940498 kubelet[2849]: E0130 16:03:09.938722 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.940498 kubelet[2849]: W0130 16:03:09.938788 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.940498 kubelet[2849]: E0130 16:03:09.938761 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.940498 kubelet[2849]: E0130 16:03:09.938848 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:09.940498 kubelet[2849]: E0130 16:03:09.939692 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.940498 kubelet[2849]: W0130 16:03:09.939701 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.940498 kubelet[2849]: E0130 16:03:09.939750 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.940801 containerd[1586]: time="2025-01-30T16:03:09.925940313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:03:09.940801 containerd[1586]: time="2025-01-30T16:03:09.926907378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:03:09.940801 containerd[1586]: time="2025-01-30T16:03:09.926923138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:09.940801 containerd[1586]: time="2025-01-30T16:03:09.927566545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:09.943150 kubelet[2849]: I0130 16:03:09.941731 2849 topology_manager.go:215] "Topology Admit Handler" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b" podNamespace="calico-system" podName="csi-node-driver-pt8fx" Jan 30 16:03:09.950160 kubelet[2849]: E0130 16:03:09.944912 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.950328 kubelet[2849]: W0130 16:03:09.950311 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.950888 kubelet[2849]: E0130 16:03:09.950876 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.951221 kubelet[2849]: W0130 16:03:09.951070 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.952322 kubelet[2849]: E0130 16:03:09.952309 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.952650 kubelet[2849]: W0130 16:03:09.952486 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.953477 kubelet[2849]: E0130 16:03:09.953212 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.953477 kubelet[2849]: W0130 16:03:09.953224 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: 
"" Jan 30 16:03:09.953897 kubelet[2849]: E0130 16:03:09.953875 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.955562 kubelet[2849]: W0130 16:03:09.955375 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.955562 kubelet[2849]: E0130 16:03:09.954293 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.955562 kubelet[2849]: E0130 16:03:09.952306 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b" Jan 30 16:03:09.955562 kubelet[2849]: E0130 16:03:09.954300 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.955711 kubelet[2849]: E0130 16:03:09.954307 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.955711 kubelet[2849]: E0130 16:03:09.954275 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.955711 kubelet[2849]: E0130 16:03:09.955679 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.957590 kubelet[2849]: E0130 16:03:09.956829 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.957590 kubelet[2849]: W0130 16:03:09.956841 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.957590 kubelet[2849]: E0130 16:03:09.956886 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.959388 kubelet[2849]: E0130 16:03:09.958405 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.959388 kubelet[2849]: W0130 16:03:09.958417 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.959388 kubelet[2849]: E0130 16:03:09.959040 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:09.961176 kubelet[2849]: E0130 16:03:09.961087 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.962213 kubelet[2849]: W0130 16:03:09.962195 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.962595 kubelet[2849]: E0130 16:03:09.962580 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.963343 kubelet[2849]: E0130 16:03:09.963238 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.963343 kubelet[2849]: W0130 16:03:09.963251 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.963343 kubelet[2849]: E0130 16:03:09.963291 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.966255 kubelet[2849]: E0130 16:03:09.966239 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.966527 kubelet[2849]: W0130 16:03:09.966396 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.966813 kubelet[2849]: E0130 16:03:09.966719 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.967164 kubelet[2849]: E0130 16:03:09.967073 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.967164 kubelet[2849]: W0130 16:03:09.967086 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.968703 kubelet[2849]: E0130 16:03:09.967908 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.970849 kubelet[2849]: E0130 16:03:09.969864 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.970849 kubelet[2849]: W0130 16:03:09.969880 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.970979 kubelet[2849]: E0130 16:03:09.970963 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:09.972484 kubelet[2849]: E0130 16:03:09.972469 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.972577 kubelet[2849]: W0130 16:03:09.972555 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.972780 kubelet[2849]: E0130 16:03:09.972748 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.973710 kubelet[2849]: E0130 16:03:09.972885 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.973710 kubelet[2849]: W0130 16:03:09.972895 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.973710 kubelet[2849]: E0130 16:03:09.972972 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.974140 kubelet[2849]: E0130 16:03:09.974128 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.974215 kubelet[2849]: W0130 16:03:09.974203 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.974482 kubelet[2849]: E0130 16:03:09.974449 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.975301 kubelet[2849]: E0130 16:03:09.974621 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.975301 kubelet[2849]: W0130 16:03:09.974633 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.975301 kubelet[2849]: E0130 16:03:09.974643 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:09.975533 kubelet[2849]: E0130 16:03:09.975522 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.975596 kubelet[2849]: W0130 16:03:09.975585 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.975655 kubelet[2849]: E0130 16:03:09.975645 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:09.992862 kubelet[2849]: E0130 16:03:09.992819 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:09.993091 kubelet[2849]: W0130 16:03:09.993006 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:09.993194 kubelet[2849]: E0130 16:03:09.993180 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.011430 kubelet[2849]: E0130 16:03:10.011239 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.012083 kubelet[2849]: W0130 16:03:10.012064 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.012185 kubelet[2849]: E0130 16:03:10.012172 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.014111 kubelet[2849]: E0130 16:03:10.014095 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.014186 kubelet[2849]: W0130 16:03:10.014175 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.014262 kubelet[2849]: E0130 16:03:10.014251 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.015573 kubelet[2849]: E0130 16:03:10.015552 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.015970 kubelet[2849]: W0130 16:03:10.015858 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.016532 kubelet[2849]: E0130 16:03:10.016448 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.018119 kubelet[2849]: E0130 16:03:10.017440 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.018119 kubelet[2849]: W0130 16:03:10.017476 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.018119 kubelet[2849]: E0130 16:03:10.017488 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:10.018557 kubelet[2849]: E0130 16:03:10.018440 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.018557 kubelet[2849]: W0130 16:03:10.018468 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.018557 kubelet[2849]: E0130 16:03:10.018480 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.018932 kubelet[2849]: E0130 16:03:10.018763 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.018932 kubelet[2849]: W0130 16:03:10.018773 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.018932 kubelet[2849]: E0130 16:03:10.018782 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.019237 kubelet[2849]: E0130 16:03:10.019088 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.019237 kubelet[2849]: W0130 16:03:10.019099 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.019237 kubelet[2849]: E0130 16:03:10.019109 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.019878 kubelet[2849]: E0130 16:03:10.019729 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.019878 kubelet[2849]: W0130 16:03:10.019770 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.019878 kubelet[2849]: E0130 16:03:10.019782 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.020878 kubelet[2849]: E0130 16:03:10.020865 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.021063 kubelet[2849]: W0130 16:03:10.020973 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.021063 kubelet[2849]: E0130 16:03:10.020989 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:10.021389 kubelet[2849]: E0130 16:03:10.021364 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.021389 kubelet[2849]: W0130 16:03:10.021435 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.021389 kubelet[2849]: E0130 16:03:10.021448 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.021712 kubelet[2849]: E0130 16:03:10.021701 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.021793 kubelet[2849]: W0130 16:03:10.021783 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.021882 kubelet[2849]: E0130 16:03:10.021870 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.022267 kubelet[2849]: E0130 16:03:10.022134 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.022267 kubelet[2849]: W0130 16:03:10.022163 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.022267 kubelet[2849]: E0130 16:03:10.022173 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.022426 kubelet[2849]: E0130 16:03:10.022416 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.022879 kubelet[2849]: W0130 16:03:10.022498 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.023396 kubelet[2849]: E0130 16:03:10.022940 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:10.025043 kubelet[2849]: E0130 16:03:10.023877 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:10.025043 kubelet[2849]: W0130 16:03:10.023890 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:10.025043 kubelet[2849]: E0130 16:03:10.023901 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 30 16:03:10.025411 kubelet[2849]: E0130 16:03:10.025211 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 16:03:10.025411 kubelet[2849]: W0130 16:03:10.025224 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 16:03:10.025411 kubelet[2849]: E0130 16:03:10.025255 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:730 FlexVolume init-probe triplet repeats at 16:03:10.026151, 16:03:10.027756, 16:03:10.030115, 16:03:10.030531, and 16:03:10.032371 ...]
Jan 30 16:03:10.064764 containerd[1586]: time="2025-01-30T16:03:10.064523246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gfwdg,Uid:b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a,Namespace:calico-system,Attempt:0,}"
Jan 30 16:03:10.105891 containerd[1586]: time="2025-01-30T16:03:10.105736283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7999777c77-spggr,Uid:38db8b1d-7128-41d0-8067-7fb59ecf7a0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8f56d67616438bd1eea1f2a04cff113648b0e4e9956cd63e233eda5038b1821\""
Jan 30 16:03:10.110581 kubelet[2849]: E0130 16:03:10.110513 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 16:03:10.110581 kubelet[2849]: W0130 16:03:10.110531 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 16:03:10.111321 kubelet[2849]: E0130 16:03:10.110549 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 16:03:10.111321 kubelet[2849]: I0130 16:03:10.111147 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6mq\" (UniqueName: \"kubernetes.io/projected/d9d99818-5b34-43e4-ab32-bbd2d570ff1b-kube-api-access-9k6mq\") pod \"csi-node-driver-pt8fx\" (UID: \"d9d99818-5b34-43e4-ab32-bbd2d570ff1b\") " pod="calico-system/csi-node-driver-pt8fx"
Jan 30 16:03:10.111421 containerd[1586]: time="2025-01-30T16:03:10.110902937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
[... the triplet repeats at 16:03:10.113505, 16:03:10.114158, and 16:03:10.114522 ...]
Jan 30 16:03:10.115152 kubelet[2849]: I0130 16:03:10.114944 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9d99818-5b34-43e4-ab32-bbd2d570ff1b-kubelet-dir\") pod \"csi-node-driver-pt8fx\" (UID: \"d9d99818-5b34-43e4-ab32-bbd2d570ff1b\") " pod="calico-system/csi-node-driver-pt8fx"
[... the triplet repeats once more at 16:03:10.116630 ...]
Jan 30 16:03:10.116863 kubelet[2849]: I0130 16:03:10.116686 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d9d99818-5b34-43e4-ab32-bbd2d570ff1b-registration-dir\") pod \"csi-node-driver-pt8fx\" (UID: \"d9d99818-5b34-43e4-ab32-bbd2d570ff1b\") " pod="calico-system/csi-node-driver-pt8fx"
[... the triplet repeats at 16:03:10.117136, 16:03:10.118037, and 16:03:10.118800 ...]
Jan 30 16:03:10.118941 kubelet[2849]: I0130 16:03:10.118893 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d9d99818-5b34-43e4-ab32-bbd2d570ff1b-varrun\") pod \"csi-node-driver-pt8fx\" (UID: \"d9d99818-5b34-43e4-ab32-bbd2d570ff1b\") " pod="calico-system/csi-node-driver-pt8fx"
[... the triplet repeats at 16:03:10.119270 and 16:03:10.120637 ...]
Jan 30 16:03:10.120830 kubelet[2849]: I0130 16:03:10.120777 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d9d99818-5b34-43e4-ab32-bbd2d570ff1b-socket-dir\") pod \"csi-node-driver-pt8fx\" (UID: \"d9d99818-5b34-43e4-ab32-bbd2d570ff1b\") " pod="calico-system/csi-node-driver-pt8fx"
[... the triplet repeats at 16:03:10.122189, 16:03:10.122875, 16:03:10.123642, 16:03:10.124118, and 16:03:10.124710 ...]
Jan 30 16:03:10.137948 containerd[1586]: time="2025-01-30T16:03:10.136309768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 16:03:10.137948 containerd[1586]: time="2025-01-30T16:03:10.137471888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 16:03:10.137948 containerd[1586]: time="2025-01-30T16:03:10.137502285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 16:03:10.137948 containerd[1586]: time="2025-01-30T16:03:10.137612903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 16:03:10.199925 containerd[1586]: time="2025-01-30T16:03:10.199884271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gfwdg,Uid:b8dcce03-1d16-4f65-b1f3-f1c4dc0cf34a,Namespace:calico-system,Attempt:0,} returns sandbox id \"5913f9d7cafa8446d4bb1921763a6a5e2a74e0aeb54674abfddf76e30b1d4260\""
[... the triplet repeats at 16:03:10.224304 and 16:03:10.225039 ...]
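Every kubelet triplet in this storm is one failure mode: the FlexVolume prober scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds a nodeagent~uds directory, and executes its driver binary uds with the single argument init. The binary is missing, so the call produces no output, and decoding that empty output as JSON is what yields "unexpected end of JSON input". A minimal sketch of the decode step in Go (the struct below is an illustrative assumption, not the kubelet's exact type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // driverStatus loosely mirrors a FlexVolume driver reply; the field
    // set here is an assumption for illustration only.
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	var st driverStatus

    	// A driver binary missing from $PATH produces no stdout at all;
    	// unmarshalling the empty output fails exactly as logged.
    	err := json.Unmarshal([]byte(""), &st)
    	fmt.Println(err) // unexpected end of JSON input

    	// A well-formed init reply decodes cleanly.
    	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
    	fmt.Println(json.Unmarshal(ok, &st), st.Status) // <nil> Success
    }

Running this prints the exact error string seen in every driver-call.go:262 record above.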
[... the triplet repeats continuously from 16:03:10.225588 through 16:03:10.253618 ...]
Jan 30 16:03:11.256915 kubelet[2849]: E0130 16:03:11.256839 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b"
Jan 30 16:03:11.895148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3354155158.mount: Deactivated successfully.
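The probe reruns on every plugin-directory rescan, so the triplets recur for as long as nodeagent~uds exists without a working binary; the driver it names was likely meant to be installed by an Istio-style node agent. Absent that, the noise stops either by deleting the stale directory or by placing something that answers the FlexVolume call convention at the logged path. A hypothetical stub in Go, shown only to illustrate the expected init reply (installing such a stub is an assumption on my part, not something the log prescribes):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // reply is the minimal FlexVolume response shape: a status string
    // plus, for init, a capabilities map telling the kubelet not to
    // expect attach/detach support from this driver.
    type reply struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	out := json.NewEncoder(os.Stdout)
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// Valid JSON on stdout is all the prober needs to stop
    		// logging "unexpected end of JSON input".
    		out.Encode(reply{Status: "Success", Capabilities: map[string]bool{"attach": false}})
    		return
    	}
    	// Decline every other call so the kubelet falls back to
    	// its built-in volume plugins.
    	out.Encode(reply{Status: "Not supported", Message: fmt.Sprintf("call %v not implemented", os.Args[1:])})
    	os.Exit(1)
    }

FlexVolume drivers communicate over argv and stdout: init must print a JSON object with "status":"Success", and "capabilities":{"attach":false} keeps attach/detach from being routed through the driver.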
Jan 30 16:03:13.018744 containerd[1586]: time="2025-01-30T16:03:13.018700560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:03:13.020069 containerd[1586]: time="2025-01-30T16:03:13.020034864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 30 16:03:13.020858 containerd[1586]: time="2025-01-30T16:03:13.020836408Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:03:13.023333 containerd[1586]: time="2025-01-30T16:03:13.023290161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 16:03:13.024234 containerd[1586]: time="2025-01-30T16:03:13.023980787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.913046861s"
Jan 30 16:03:13.024234 containerd[1586]: time="2025-01-30T16:03:13.024042693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 30 16:03:13.026677 containerd[1586]: time="2025-01-30T16:03:13.026649184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 16:03:13.040443 containerd[1586]: time="2025-01-30T16:03:13.040407105Z" level=info msg="CreateContainer within sandbox \"f8f56d67616438bd1eea1f2a04cff113648b0e4e9956cd63e233eda5038b1821\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 16:03:13.062816 containerd[1586]: time="2025-01-30T16:03:13.062774981Z" level=info msg="CreateContainer within sandbox \"f8f56d67616438bd1eea1f2a04cff113648b0e4e9956cd63e233eda5038b1821\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"896f7d81b0828cfc86af755582822803aa8768a68ba109ed362d74690f49c370\""
Jan 30 16:03:13.064720 containerd[1586]: time="2025-01-30T16:03:13.064415278Z" level=info msg="StartContainer for \"896f7d81b0828cfc86af755582822803aa8768a68ba109ed362d74690f49c370\""
Jan 30 16:03:13.150065 containerd[1586]: time="2025-01-30T16:03:13.148137402Z" level=info msg="StartContainer for \"896f7d81b0828cfc86af755582822803aa8768a68ba109ed362d74690f49c370\" returns successfully"
Jan 30 16:03:13.256605 kubelet[2849]: E0130 16:03:13.256208 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b"
Jan 30 16:03:13.421580 kubelet[2849]: I0130 16:03:13.419212 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7999777c77-spggr" podStartSLOduration=1.503610841 podStartE2EDuration="4.419178262s" podCreationTimestamp="2025-01-30 16:03:09 +0000 UTC" firstStartedPulling="2025-01-30 16:03:10.109566229 +0000 UTC m=+21.989949027" lastFinishedPulling="2025-01-30 16:03:13.02513365 +0000 UTC m=+24.905516448" observedRunningTime="2025-01-30 16:03:13.418742144 +0000 UTC m=+25.299124982" watchObservedRunningTime="2025-01-30 16:03:13.419178262 +0000 UTC m=+25.299561100"
Jan 30 16:03:13.461123 kubelet[2849]: E0130 16:03:13.460824 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 16:03:13.461123 kubelet[2849]: W0130 16:03:13.461064 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 16:03:13.461631 kubelet[2849]: E0130 16:03:13.461570 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the triplet repeats at 16:03:13.462623, 16:03:13.464116, and 16:03:13.464688 ...]
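The startup-latency record decodes cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch of the arithmetic using the quoted timestamps (an illustration of the relationship, not kubelet code):

    package main

    import (
    	"fmt"
    	"time"
    )

    // mustParse is a tiny helper for the fixed timestamps quoted from
    // the log record above.
    func mustParse(s string) time.Time {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-01-30 16:03:09 +0000 UTC")
    	firstPull := mustParse("2025-01-30 16:03:10.109566229 +0000 UTC")
    	lastPull := mustParse("2025-01-30 16:03:13.02513365 +0000 UTC")
    	running := mustParse("2025-01-30 16:03:13.419178262 +0000 UTC")

    	e2e := running.Sub(created)        // podStartE2EDuration
    	pulling := lastPull.Sub(firstPull) // image-pull window
    	slo := e2e - pulling               // podStartSLOduration

    	fmt.Println(e2e, pulling, slo) // 4.419178262s 2.915567421s 1.503610841s
    }

The pull window (2.915567421s) is slightly wider than containerd's own "in 2.913046861s", presumably because the kubelet's timestamps bracket the whole CRI pull call rather than the transfer alone.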
[... the triplet repeats continuously from 16:03:13.465701 through 16:03:13.479158 ...]
Jan 30 16:03:14.391733 kubelet[2849]: I0130 16:03:14.391304 2849 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 16:03:14.484510 kubelet[2849]: E0130 16:03:14.484445 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 16:03:14.484510 kubelet[2849]: W0130 16:03:14.484483 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 16:03:14.484510 kubelet[2849]: E0130 16:03:14.484512 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the triplet repeats continuously from 16:03:14.484857 through 16:03:14.489693 ...]
Jan 30 16:03:14.585263 kubelet[2849]: E0130 16:03:14.584995 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 16:03:14.585263 kubelet[2849]: W0130 16:03:14.585055 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 16:03:14.585263 kubelet[2849]: E0130 16:03:14.585081 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:14.586231 kubelet[2849]: E0130 16:03:14.586135 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.586231 kubelet[2849]: W0130 16:03:14.586146 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.586231 kubelet[2849]: E0130 16:03:14.586164 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.586458 kubelet[2849]: E0130 16:03:14.586425 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.586458 kubelet[2849]: W0130 16:03:14.586443 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.587070 kubelet[2849]: E0130 16:03:14.587051 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.587293 kubelet[2849]: E0130 16:03:14.587274 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.587293 kubelet[2849]: W0130 16:03:14.587287 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.587362 kubelet[2849]: E0130 16:03:14.587315 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.587514 kubelet[2849]: E0130 16:03:14.587499 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.587514 kubelet[2849]: W0130 16:03:14.587512 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.587623 kubelet[2849]: E0130 16:03:14.587601 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.588071 kubelet[2849]: E0130 16:03:14.588056 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.588071 kubelet[2849]: W0130 16:03:14.588070 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.588226 kubelet[2849]: E0130 16:03:14.588136 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:14.588226 kubelet[2849]: E0130 16:03:14.588220 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.588274 kubelet[2849]: W0130 16:03:14.588229 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.588709 kubelet[2849]: E0130 16:03:14.588691 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.588826 kubelet[2849]: E0130 16:03:14.588811 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.588883 kubelet[2849]: W0130 16:03:14.588826 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.588883 kubelet[2849]: E0130 16:03:14.588840 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.589149 kubelet[2849]: E0130 16:03:14.589138 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.589282 kubelet[2849]: W0130 16:03:14.589207 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.589282 kubelet[2849]: E0130 16:03:14.589228 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.589611 kubelet[2849]: E0130 16:03:14.589506 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.589611 kubelet[2849]: W0130 16:03:14.589533 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.589611 kubelet[2849]: E0130 16:03:14.589549 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.589934 kubelet[2849]: E0130 16:03:14.589803 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.589934 kubelet[2849]: W0130 16:03:14.589814 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.589934 kubelet[2849]: E0130 16:03:14.589824 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:14.590184 kubelet[2849]: E0130 16:03:14.590103 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.590184 kubelet[2849]: W0130 16:03:14.590114 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.590184 kubelet[2849]: E0130 16:03:14.590138 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.592183 kubelet[2849]: E0130 16:03:14.592077 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.592183 kubelet[2849]: W0130 16:03:14.592095 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.592599 kubelet[2849]: E0130 16:03:14.592476 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.592599 kubelet[2849]: E0130 16:03:14.592491 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.592599 kubelet[2849]: W0130 16:03:14.592508 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.593061 kubelet[2849]: E0130 16:03:14.592795 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.593061 kubelet[2849]: E0130 16:03:14.592856 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.593061 kubelet[2849]: W0130 16:03:14.592864 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.593061 kubelet[2849]: E0130 16:03:14.592889 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.594061 kubelet[2849]: E0130 16:03:14.593928 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.594061 kubelet[2849]: W0130 16:03:14.593940 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.594061 kubelet[2849]: E0130 16:03:14.593950 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 16:03:14.594430 kubelet[2849]: E0130 16:03:14.594219 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.594430 kubelet[2849]: W0130 16:03:14.594230 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.594430 kubelet[2849]: E0130 16:03:14.594240 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:14.594696 kubelet[2849]: E0130 16:03:14.594685 2849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 16:03:14.594758 kubelet[2849]: W0130 16:03:14.594748 2849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 16:03:14.594815 kubelet[2849]: E0130 16:03:14.594805 2849 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 16:03:15.032383 containerd[1586]: time="2025-01-30T16:03:15.032295867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:15.046155 containerd[1586]: time="2025-01-30T16:03:15.045928793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 16:03:15.074494 containerd[1586]: time="2025-01-30T16:03:15.074329574Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:15.091669 containerd[1586]: time="2025-01-30T16:03:15.091535354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:15.094174 containerd[1586]: time="2025-01-30T16:03:15.093886014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.066967465s" Jan 30 16:03:15.094174 containerd[1586]: time="2025-01-30T16:03:15.093965112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 16:03:15.099512 containerd[1586]: time="2025-01-30T16:03:15.099407512Z" level=info msg="CreateContainer within sandbox \"5913f9d7cafa8446d4bb1921763a6a5e2a74e0aeb54674abfddf76e30b1d4260\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 16:03:15.135971 containerd[1586]: time="2025-01-30T16:03:15.135665154Z" level=info msg="CreateContainer within sandbox 
\"5913f9d7cafa8446d4bb1921763a6a5e2a74e0aeb54674abfddf76e30b1d4260\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cbce8f928777d9e1e1acb4bbe9b56fc4cca4f6a0a101be411d6fe97b5fbe0f01\"" Jan 30 16:03:15.137503 containerd[1586]: time="2025-01-30T16:03:15.137157864Z" level=info msg="StartContainer for \"cbce8f928777d9e1e1acb4bbe9b56fc4cca4f6a0a101be411d6fe97b5fbe0f01\"" Jan 30 16:03:15.231801 containerd[1586]: time="2025-01-30T16:03:15.231460531Z" level=info msg="StartContainer for \"cbce8f928777d9e1e1acb4bbe9b56fc4cca4f6a0a101be411d6fe97b5fbe0f01\" returns successfully" Jan 30 16:03:15.257064 kubelet[2849]: E0130 16:03:15.256058 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b" Jan 30 16:03:15.266399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbce8f928777d9e1e1acb4bbe9b56fc4cca4f6a0a101be411d6fe97b5fbe0f01-rootfs.mount: Deactivated successfully. Jan 30 16:03:16.031495 containerd[1586]: time="2025-01-30T16:03:16.031385285Z" level=info msg="shim disconnected" id=cbce8f928777d9e1e1acb4bbe9b56fc4cca4f6a0a101be411d6fe97b5fbe0f01 namespace=k8s.io Jan 30 16:03:16.031495 containerd[1586]: time="2025-01-30T16:03:16.031481416Z" level=warning msg="cleaning up after shim disconnected" id=cbce8f928777d9e1e1acb4bbe9b56fc4cca4f6a0a101be411d6fe97b5fbe0f01 namespace=k8s.io Jan 30 16:03:16.031495 containerd[1586]: time="2025-01-30T16:03:16.031504288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 16:03:16.407918 containerd[1586]: time="2025-01-30T16:03:16.407629318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 16:03:17.258961 kubelet[2849]: E0130 16:03:17.256788 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b" Jan 30 16:03:19.256290 kubelet[2849]: E0130 16:03:19.255882 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b" Jan 30 16:03:21.256109 kubelet[2849]: E0130 16:03:21.256037 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b" Jan 30 16:03:21.922962 containerd[1586]: time="2025-01-30T16:03:21.922899504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:21.924202 containerd[1586]: time="2025-01-30T16:03:21.924003151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 16:03:21.925200 containerd[1586]: time="2025-01-30T16:03:21.925168355Z" level=info msg="ImageCreate event 
name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:21.929715 containerd[1586]: time="2025-01-30T16:03:21.928587821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:21.931086 containerd[1586]: time="2025-01-30T16:03:21.931053425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.523317806s" Jan 30 16:03:21.931180 containerd[1586]: time="2025-01-30T16:03:21.931162892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 16:03:21.935886 containerd[1586]: time="2025-01-30T16:03:21.935300457Z" level=info msg="CreateContainer within sandbox \"5913f9d7cafa8446d4bb1921763a6a5e2a74e0aeb54674abfddf76e30b1d4260\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 16:03:21.956810 containerd[1586]: time="2025-01-30T16:03:21.956729544Z" level=info msg="CreateContainer within sandbox \"5913f9d7cafa8446d4bb1921763a6a5e2a74e0aeb54674abfddf76e30b1d4260\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"662d8e831c141b1da5eebf03fe30ffc822e2d39322ab6196347fb819b1f723fb\"" Jan 30 16:03:21.960081 containerd[1586]: time="2025-01-30T16:03:21.957388261Z" level=info msg="StartContainer for \"662d8e831c141b1da5eebf03fe30ffc822e2d39322ab6196347fb819b1f723fb\"" Jan 30 16:03:22.023280 containerd[1586]: time="2025-01-30T16:03:22.023237107Z" level=info msg="StartContainer for \"662d8e831c141b1da5eebf03fe30ffc822e2d39322ab6196347fb819b1f723fb\" returns successfully" Jan 30 16:03:23.257420 kubelet[2849]: E0130 16:03:23.256366 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b" Jan 30 16:03:23.513010 containerd[1586]: time="2025-01-30T16:03:23.512771392Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 16:03:23.553649 kubelet[2849]: I0130 16:03:23.553589 2849 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 16:03:23.572732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-662d8e831c141b1da5eebf03fe30ffc822e2d39322ab6196347fb819b1f723fb-rootfs.mount: Deactivated successfully. 
Jan 30 16:03:23.767291 kubelet[2849]: I0130 16:03:23.766976 2849 topology_manager.go:215] "Topology Admit Handler" podUID="ebe91c93-3db1-469e-ba8a-da3a8ddda05d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-thhwz" Jan 30 16:03:23.828306 kubelet[2849]: I0130 16:03:23.827258 2849 topology_manager.go:215] "Topology Admit Handler" podUID="5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2" podNamespace="calico-apiserver" podName="calico-apiserver-9985758dc-6trrb" Jan 30 16:03:23.832833 kubelet[2849]: I0130 16:03:23.830488 2849 topology_manager.go:215] "Topology Admit Handler" podUID="9de676ba-fb3c-4eb2-bb01-cf41724693f5" podNamespace="calico-apiserver" podName="calico-apiserver-9985758dc-qxvp4" Jan 30 16:03:23.849520 kubelet[2849]: I0130 16:03:23.848912 2849 topology_manager.go:215] "Topology Admit Handler" podUID="14c2adf1-67af-4345-a606-62d6d8c2f702" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7chv9" Jan 30 16:03:23.850459 kubelet[2849]: I0130 16:03:23.850302 2849 topology_manager.go:215] "Topology Admit Handler" podUID="40fbe7ed-2398-4d81-b72d-863ad9e6ffb9" podNamespace="calico-system" podName="calico-kube-controllers-85dd67d8df-dv7gp" Jan 30 16:03:23.867072 kubelet[2849]: I0130 16:03:23.865777 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebe91c93-3db1-469e-ba8a-da3a8ddda05d-config-volume\") pod \"coredns-7db6d8ff4d-thhwz\" (UID: \"ebe91c93-3db1-469e-ba8a-da3a8ddda05d\") " pod="kube-system/coredns-7db6d8ff4d-thhwz" Jan 30 16:03:23.867072 kubelet[2849]: I0130 16:03:23.865873 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhns\" (UniqueName: \"kubernetes.io/projected/ebe91c93-3db1-469e-ba8a-da3a8ddda05d-kube-api-access-lqhns\") pod \"coredns-7db6d8ff4d-thhwz\" (UID: \"ebe91c93-3db1-469e-ba8a-da3a8ddda05d\") " pod="kube-system/coredns-7db6d8ff4d-thhwz" Jan 30 16:03:23.867072 kubelet[2849]: I0130 16:03:23.865951 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmnrw\" (UniqueName: \"kubernetes.io/projected/5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2-kube-api-access-qmnrw\") pod \"calico-apiserver-9985758dc-6trrb\" (UID: \"5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2\") " pod="calico-apiserver/calico-apiserver-9985758dc-6trrb" Jan 30 16:03:23.867072 kubelet[2849]: I0130 16:03:23.866010 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cg5l\" (UniqueName: \"kubernetes.io/projected/40fbe7ed-2398-4d81-b72d-863ad9e6ffb9-kube-api-access-9cg5l\") pod \"calico-kube-controllers-85dd67d8df-dv7gp\" (UID: \"40fbe7ed-2398-4d81-b72d-863ad9e6ffb9\") " pod="calico-system/calico-kube-controllers-85dd67d8df-dv7gp" Jan 30 16:03:23.867072 kubelet[2849]: I0130 16:03:23.866141 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrxq\" (UniqueName: \"kubernetes.io/projected/9de676ba-fb3c-4eb2-bb01-cf41724693f5-kube-api-access-2lrxq\") pod \"calico-apiserver-9985758dc-qxvp4\" (UID: \"9de676ba-fb3c-4eb2-bb01-cf41724693f5\") " pod="calico-apiserver/calico-apiserver-9985758dc-qxvp4" Jan 30 16:03:23.867683 kubelet[2849]: I0130 16:03:23.866189 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14c2adf1-67af-4345-a606-62d6d8c2f702-config-volume\") 
pod \"coredns-7db6d8ff4d-7chv9\" (UID: \"14c2adf1-67af-4345-a606-62d6d8c2f702\") " pod="kube-system/coredns-7db6d8ff4d-7chv9" Jan 30 16:03:23.867683 kubelet[2849]: I0130 16:03:23.866237 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40fbe7ed-2398-4d81-b72d-863ad9e6ffb9-tigera-ca-bundle\") pod \"calico-kube-controllers-85dd67d8df-dv7gp\" (UID: \"40fbe7ed-2398-4d81-b72d-863ad9e6ffb9\") " pod="calico-system/calico-kube-controllers-85dd67d8df-dv7gp" Jan 30 16:03:23.867683 kubelet[2849]: I0130 16:03:23.866302 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2-calico-apiserver-certs\") pod \"calico-apiserver-9985758dc-6trrb\" (UID: \"5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2\") " pod="calico-apiserver/calico-apiserver-9985758dc-6trrb" Jan 30 16:03:23.867683 kubelet[2849]: I0130 16:03:23.866347 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwzzj\" (UniqueName: \"kubernetes.io/projected/14c2adf1-67af-4345-a606-62d6d8c2f702-kube-api-access-rwzzj\") pod \"coredns-7db6d8ff4d-7chv9\" (UID: \"14c2adf1-67af-4345-a606-62d6d8c2f702\") " pod="kube-system/coredns-7db6d8ff4d-7chv9" Jan 30 16:03:23.867683 kubelet[2849]: I0130 16:03:23.866393 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9de676ba-fb3c-4eb2-bb01-cf41724693f5-calico-apiserver-certs\") pod \"calico-apiserver-9985758dc-qxvp4\" (UID: \"9de676ba-fb3c-4eb2-bb01-cf41724693f5\") " pod="calico-apiserver/calico-apiserver-9985758dc-qxvp4" Jan 30 16:03:24.379809 containerd[1586]: time="2025-01-30T16:03:24.379756711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-thhwz,Uid:ebe91c93-3db1-469e-ba8a-da3a8ddda05d,Namespace:kube-system,Attempt:0,}" Jan 30 16:03:24.445971 containerd[1586]: time="2025-01-30T16:03:24.445916531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9985758dc-6trrb,Uid:5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2,Namespace:calico-apiserver,Attempt:0,}" Jan 30 16:03:24.462963 containerd[1586]: time="2025-01-30T16:03:24.462500224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85dd67d8df-dv7gp,Uid:40fbe7ed-2398-4d81-b72d-863ad9e6ffb9,Namespace:calico-system,Attempt:0,}" Jan 30 16:03:24.464898 containerd[1586]: time="2025-01-30T16:03:24.464622956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9985758dc-qxvp4,Uid:9de676ba-fb3c-4eb2-bb01-cf41724693f5,Namespace:calico-apiserver,Attempt:0,}" Jan 30 16:03:24.477768 containerd[1586]: time="2025-01-30T16:03:24.476590744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7chv9,Uid:14c2adf1-67af-4345-a606-62d6d8c2f702,Namespace:kube-system,Attempt:0,}" Jan 30 16:03:24.480943 containerd[1586]: time="2025-01-30T16:03:24.480881584Z" level=info msg="shim disconnected" id=662d8e831c141b1da5eebf03fe30ffc822e2d39322ab6196347fb819b1f723fb namespace=k8s.io Jan 30 16:03:24.480943 containerd[1586]: time="2025-01-30T16:03:24.480938812Z" level=warning msg="cleaning up after shim disconnected" id=662d8e831c141b1da5eebf03fe30ffc822e2d39322ab6196347fb819b1f723fb namespace=k8s.io Jan 30 16:03:24.480943 containerd[1586]: 
time="2025-01-30T16:03:24.480951466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 16:03:24.733050 containerd[1586]: time="2025-01-30T16:03:24.731160585Z" level=error msg="Failed to destroy network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.734340 containerd[1586]: time="2025-01-30T16:03:24.734294097Z" level=error msg="encountered an error cleaning up failed sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.734393 containerd[1586]: time="2025-01-30T16:03:24.734360322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-thhwz,Uid:ebe91c93-3db1-469e-ba8a-da3a8ddda05d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.735223 kubelet[2849]: E0130 16:03:24.735180 2849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.737444 kubelet[2849]: E0130 16:03:24.735251 2849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-thhwz" Jan 30 16:03:24.737444 kubelet[2849]: E0130 16:03:24.735275 2849 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-thhwz" Jan 30 16:03:24.737444 kubelet[2849]: E0130 16:03:24.735327 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-thhwz_kube-system(ebe91c93-3db1-469e-ba8a-da3a8ddda05d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-thhwz_kube-system(ebe91c93-3db1-469e-ba8a-da3a8ddda05d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-thhwz" podUID="ebe91c93-3db1-469e-ba8a-da3a8ddda05d" Jan 30 16:03:24.735816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557-shm.mount: Deactivated successfully. Jan 30 16:03:24.739761 containerd[1586]: time="2025-01-30T16:03:24.738650150Z" level=error msg="Failed to destroy network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.743001 containerd[1586]: time="2025-01-30T16:03:24.742069913Z" level=error msg="encountered an error cleaning up failed sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.742928 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed-shm.mount: Deactivated successfully. Jan 30 16:03:24.743139 containerd[1586]: time="2025-01-30T16:03:24.743039436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7chv9,Uid:14c2adf1-67af-4345-a606-62d6d8c2f702,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.744259 kubelet[2849]: E0130 16:03:24.743463 2849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.744259 kubelet[2849]: E0130 16:03:24.743520 2849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7chv9" Jan 30 16:03:24.744259 kubelet[2849]: E0130 16:03:24.743542 2849 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7chv9" Jan 30 16:03:24.744384 kubelet[2849]: E0130 16:03:24.743581 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-7chv9_kube-system(14c2adf1-67af-4345-a606-62d6d8c2f702)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7chv9_kube-system(14c2adf1-67af-4345-a606-62d6d8c2f702)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7chv9" podUID="14c2adf1-67af-4345-a606-62d6d8c2f702" Jan 30 16:03:24.755048 containerd[1586]: time="2025-01-30T16:03:24.754543769Z" level=error msg="Failed to destroy network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.758058 containerd[1586]: time="2025-01-30T16:03:24.755973079Z" level=error msg="encountered an error cleaning up failed sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.758058 containerd[1586]: time="2025-01-30T16:03:24.756053242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9985758dc-6trrb,Uid:5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.758345 kubelet[2849]: E0130 16:03:24.758309 2849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.758670 kubelet[2849]: E0130 16:03:24.758476 2849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9985758dc-6trrb" Jan 30 16:03:24.758670 kubelet[2849]: E0130 16:03:24.758524 2849 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9985758dc-6trrb" Jan 30 16:03:24.758670 kubelet[2849]: E0130 16:03:24.758592 
2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9985758dc-6trrb_calico-apiserver(5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9985758dc-6trrb_calico-apiserver(5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9985758dc-6trrb" podUID="5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2" Jan 30 16:03:24.758751 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2-shm.mount: Deactivated successfully. Jan 30 16:03:24.762757 containerd[1586]: time="2025-01-30T16:03:24.762705854Z" level=error msg="Failed to destroy network for sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.763189 containerd[1586]: time="2025-01-30T16:03:24.763162598Z" level=error msg="encountered an error cleaning up failed sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.763603 containerd[1586]: time="2025-01-30T16:03:24.763577402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9985758dc-qxvp4,Uid:9de676ba-fb3c-4eb2-bb01-cf41724693f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.765583 kubelet[2849]: E0130 16:03:24.765320 2849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.765583 kubelet[2849]: E0130 16:03:24.765371 2849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9985758dc-qxvp4" Jan 30 16:03:24.765583 kubelet[2849]: E0130 16:03:24.765439 2849 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9985758dc-qxvp4" Jan 30 16:03:24.765847 kubelet[2849]: E0130 16:03:24.765530 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9985758dc-qxvp4_calico-apiserver(9de676ba-fb3c-4eb2-bb01-cf41724693f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9985758dc-qxvp4_calico-apiserver(9de676ba-fb3c-4eb2-bb01-cf41724693f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9985758dc-qxvp4" podUID="9de676ba-fb3c-4eb2-bb01-cf41724693f5" Jan 30 16:03:24.771226 containerd[1586]: time="2025-01-30T16:03:24.771161135Z" level=error msg="Failed to destroy network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.771562 containerd[1586]: time="2025-01-30T16:03:24.771537626Z" level=error msg="encountered an error cleaning up failed sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.771728 containerd[1586]: time="2025-01-30T16:03:24.771702268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85dd67d8df-dv7gp,Uid:40fbe7ed-2398-4d81-b72d-863ad9e6ffb9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.772059 kubelet[2849]: E0130 16:03:24.772001 2849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:24.772153 kubelet[2849]: E0130 16:03:24.772078 2849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85dd67d8df-dv7gp" Jan 30 16:03:24.772153 kubelet[2849]: E0130 
16:03:24.772100 2849 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85dd67d8df-dv7gp" Jan 30 16:03:24.772288 kubelet[2849]: E0130 16:03:24.772147 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85dd67d8df-dv7gp_calico-system(40fbe7ed-2398-4d81-b72d-863ad9e6ffb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85dd67d8df-dv7gp_calico-system(40fbe7ed-2398-4d81-b72d-863ad9e6ffb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85dd67d8df-dv7gp" podUID="40fbe7ed-2398-4d81-b72d-863ad9e6ffb9" Jan 30 16:03:25.258988 containerd[1586]: time="2025-01-30T16:03:25.258550234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pt8fx,Uid:d9d99818-5b34-43e4-ab32-bbd2d570ff1b,Namespace:calico-system,Attempt:0,}" Jan 30 16:03:25.368813 containerd[1586]: time="2025-01-30T16:03:25.368648734Z" level=error msg="Failed to destroy network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.370294 containerd[1586]: time="2025-01-30T16:03:25.369968708Z" level=error msg="encountered an error cleaning up failed sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.370294 containerd[1586]: time="2025-01-30T16:03:25.370113140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pt8fx,Uid:d9d99818-5b34-43e4-ab32-bbd2d570ff1b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.371010 kubelet[2849]: E0130 16:03:25.370397 2849 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.371010 kubelet[2849]: E0130 16:03:25.370455 2849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pt8fx" Jan 30 16:03:25.371010 kubelet[2849]: E0130 16:03:25.370477 2849 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pt8fx" Jan 30 16:03:25.371345 kubelet[2849]: E0130 16:03:25.370524 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pt8fx_calico-system(d9d99818-5b34-43e4-ab32-bbd2d570ff1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pt8fx_calico-system(d9d99818-5b34-43e4-ab32-bbd2d570ff1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b" Jan 30 16:03:25.454619 kubelet[2849]: I0130 16:03:25.454499 2849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:25.457640 containerd[1586]: time="2025-01-30T16:03:25.456552465Z" level=info msg="StopPodSandbox for \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\"" Jan 30 16:03:25.457640 containerd[1586]: time="2025-01-30T16:03:25.456898077Z" level=info msg="Ensure that sandbox 822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9 in task-service has been cleanup successfully" Jan 30 16:03:25.460153 kubelet[2849]: I0130 16:03:25.459989 2849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:25.464316 containerd[1586]: time="2025-01-30T16:03:25.464260079Z" level=info msg="StopPodSandbox for \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\"" Jan 30 16:03:25.465831 containerd[1586]: time="2025-01-30T16:03:25.465567259Z" level=info msg="Ensure that sandbox bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a in task-service has been cleanup successfully" Jan 30 16:03:25.472948 kubelet[2849]: I0130 16:03:25.470777 2849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:25.475137 containerd[1586]: time="2025-01-30T16:03:25.475084202Z" level=info msg="StopPodSandbox for \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\"" Jan 30 16:03:25.475629 containerd[1586]: time="2025-01-30T16:03:25.475581622Z" level=info msg="Ensure that sandbox 026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca in task-service has been cleanup successfully" Jan 30 16:03:25.496367 kubelet[2849]: I0130 16:03:25.496300 2849 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:25.507145 containerd[1586]: time="2025-01-30T16:03:25.507078437Z" level=info msg="StopPodSandbox for \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\"" Jan 30 16:03:25.507789 containerd[1586]: time="2025-01-30T16:03:25.507729718Z" level=info msg="Ensure that sandbox efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2 in task-service has been cleanup successfully" Jan 30 16:03:25.520238 kubelet[2849]: I0130 16:03:25.520118 2849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:25.520949 containerd[1586]: time="2025-01-30T16:03:25.520924460Z" level=info msg="StopPodSandbox for \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\"" Jan 30 16:03:25.521441 containerd[1586]: time="2025-01-30T16:03:25.521225730Z" level=info msg="Ensure that sandbox 36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed in task-service has been cleanup successfully" Jan 30 16:03:25.522994 kubelet[2849]: I0130 16:03:25.522975 2849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:25.525873 containerd[1586]: time="2025-01-30T16:03:25.525831011Z" level=info msg="StopPodSandbox for \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\"" Jan 30 16:03:25.527089 containerd[1586]: time="2025-01-30T16:03:25.527068169Z" level=info msg="Ensure that sandbox dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557 in task-service has been cleanup successfully" Jan 30 16:03:25.542268 containerd[1586]: time="2025-01-30T16:03:25.542221062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 16:03:25.569010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca-shm.mount: Deactivated successfully. Jan 30 16:03:25.569199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a-shm.mount: Deactivated successfully. 
Jan 30 16:03:25.614587 containerd[1586]: time="2025-01-30T16:03:25.614430056Z" level=error msg="StopPodSandbox for \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\" failed" error="failed to destroy network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.615263 kubelet[2849]: E0130 16:03:25.615221 2849 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:25.615349 kubelet[2849]: E0130 16:03:25.615285 2849 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed"} Jan 30 16:03:25.615394 kubelet[2849]: E0130 16:03:25.615350 2849 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14c2adf1-67af-4345-a606-62d6d8c2f702\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 16:03:25.615394 kubelet[2849]: E0130 16:03:25.615379 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14c2adf1-67af-4345-a606-62d6d8c2f702\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7chv9" podUID="14c2adf1-67af-4345-a606-62d6d8c2f702" Jan 30 16:03:25.624375 containerd[1586]: time="2025-01-30T16:03:25.624324864Z" level=error msg="StopPodSandbox for \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\" failed" error="failed to destroy network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.624823 kubelet[2849]: E0130 16:03:25.624622 2849 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:25.624823 kubelet[2849]: E0130 16:03:25.624721 2849 
kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557"} Jan 30 16:03:25.624823 kubelet[2849]: E0130 16:03:25.624755 2849 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebe91c93-3db1-469e-ba8a-da3a8ddda05d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 16:03:25.624823 kubelet[2849]: E0130 16:03:25.624781 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebe91c93-3db1-469e-ba8a-da3a8ddda05d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-thhwz" podUID="ebe91c93-3db1-469e-ba8a-da3a8ddda05d" Jan 30 16:03:25.636323 containerd[1586]: time="2025-01-30T16:03:25.636254676Z" level=error msg="StopPodSandbox for \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\" failed" error="failed to destroy network for sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.636807 kubelet[2849]: E0130 16:03:25.636748 2849 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:25.636996 kubelet[2849]: E0130 16:03:25.636824 2849 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a"} Jan 30 16:03:25.636996 kubelet[2849]: E0130 16:03:25.636864 2849 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9de676ba-fb3c-4eb2-bb01-cf41724693f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 16:03:25.636996 kubelet[2849]: E0130 16:03:25.636900 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9de676ba-fb3c-4eb2-bb01-cf41724693f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9985758dc-qxvp4" podUID="9de676ba-fb3c-4eb2-bb01-cf41724693f5" Jan 30 16:03:25.637318 containerd[1586]: time="2025-01-30T16:03:25.637242382Z" level=error msg="StopPodSandbox for \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\" failed" error="failed to destroy network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.638475 kubelet[2849]: E0130 16:03:25.637477 2849 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:25.638475 kubelet[2849]: E0130 16:03:25.637513 2849 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2"} Jan 30 16:03:25.638475 kubelet[2849]: E0130 16:03:25.637541 2849 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 16:03:25.638475 kubelet[2849]: E0130 16:03:25.637562 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9985758dc-6trrb" podUID="5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2" Jan 30 16:03:25.640624 containerd[1586]: time="2025-01-30T16:03:25.640594346Z" level=error msg="StopPodSandbox for \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\" failed" error="failed to destroy network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.641103 kubelet[2849]: E0130 16:03:25.641070 2849 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:25.641150 kubelet[2849]: E0130 16:03:25.641107 2849 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9"} Jan 30 16:03:25.641150 kubelet[2849]: E0130 16:03:25.641133 2849 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9d99818-5b34-43e4-ab32-bbd2d570ff1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 16:03:25.641233 kubelet[2849]: E0130 16:03:25.641156 2849 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9d99818-5b34-43e4-ab32-bbd2d570ff1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pt8fx" podUID="d9d99818-5b34-43e4-ab32-bbd2d570ff1b" Jan 30 16:03:25.642182 containerd[1586]: time="2025-01-30T16:03:25.642141099Z" level=error msg="StopPodSandbox for \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\" failed" error="failed to destroy network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 16:03:25.642331 kubelet[2849]: E0130 16:03:25.642303 2849 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:25.642371 kubelet[2849]: E0130 16:03:25.642334 2849 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca"} Jan 30 16:03:25.642371 kubelet[2849]: E0130 16:03:25.642359 2849 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40fbe7ed-2398-4d81-b72d-863ad9e6ffb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 16:03:25.642534 kubelet[2849]: E0130 16:03:25.642381 2849 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40fbe7ed-2398-4d81-b72d-863ad9e6ffb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85dd67d8df-dv7gp" podUID="40fbe7ed-2398-4d81-b72d-863ad9e6ffb9" Jan 30 16:03:28.710032 kubelet[2849]: I0130 16:03:28.709130 2849 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:03:34.351844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711017203.mount: Deactivated successfully. Jan 30 16:03:34.751104 containerd[1586]: time="2025-01-30T16:03:34.750956654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:34.754348 containerd[1586]: time="2025-01-30T16:03:34.754247593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 16:03:34.756878 containerd[1586]: time="2025-01-30T16:03:34.756788847Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:34.763421 containerd[1586]: time="2025-01-30T16:03:34.763335487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:34.765464 containerd[1586]: time="2025-01-30T16:03:34.765125093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.222848897s" Jan 30 16:03:34.765464 containerd[1586]: time="2025-01-30T16:03:34.765198352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 16:03:34.803831 containerd[1586]: time="2025-01-30T16:03:34.803754949Z" level=info msg="CreateContainer within sandbox \"5913f9d7cafa8446d4bb1921763a6a5e2a74e0aeb54674abfddf76e30b1d4260\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 16:03:34.844632 containerd[1586]: time="2025-01-30T16:03:34.844491258Z" level=info msg="CreateContainer within sandbox \"5913f9d7cafa8446d4bb1921763a6a5e2a74e0aeb54674abfddf76e30b1d4260\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"276e9cd267982b933ba8518ba933fabee47f2ed323d2ab38968c380a13ca09b8\"" Jan 30 16:03:34.846236 containerd[1586]: time="2025-01-30T16:03:34.845399060Z" level=info msg="StartContainer for \"276e9cd267982b933ba8518ba933fabee47f2ed323d2ab38968c380a13ca09b8\"" Jan 30 16:03:34.846091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574744758.mount: Deactivated successfully. 
Jan 30 16:03:34.907618 containerd[1586]: time="2025-01-30T16:03:34.907573333Z" level=info msg="StartContainer for \"276e9cd267982b933ba8518ba933fabee47f2ed323d2ab38968c380a13ca09b8\" returns successfully" Jan 30 16:03:34.982450 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 16:03:34.982553 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 30 16:03:35.626165 kubelet[2849]: I0130 16:03:35.625954 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gfwdg" podStartSLOduration=2.06170898 podStartE2EDuration="26.62592388s" podCreationTimestamp="2025-01-30 16:03:09 +0000 UTC" firstStartedPulling="2025-01-30 16:03:10.20297553 +0000 UTC m=+22.083358328" lastFinishedPulling="2025-01-30 16:03:34.76719039 +0000 UTC m=+46.647573228" observedRunningTime="2025-01-30 16:03:35.624809618 +0000 UTC m=+47.505192466" watchObservedRunningTime="2025-01-30 16:03:35.62592388 +0000 UTC m=+47.506306718" Jan 30 16:03:36.267773 containerd[1586]: time="2025-01-30T16:03:36.267407886Z" level=info msg="StopPodSandbox for \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\"" Jan 30 16:03:36.278676 containerd[1586]: time="2025-01-30T16:03:36.273808447Z" level=info msg="StopPodSandbox for \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\"" Jan 30 16:03:36.278933 containerd[1586]: time="2025-01-30T16:03:36.278661510Z" level=info msg="StopPodSandbox for \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\"" Jan 30 16:03:36.752113 systemd[1]: run-containerd-runc-k8s.io-276e9cd267982b933ba8518ba933fabee47f2ed323d2ab38968c380a13ca09b8-runc.PUArgg.mount: Deactivated successfully. Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.545 [INFO][4156] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.552 [INFO][4156] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" iface="eth0" netns="/var/run/netns/cni-ee94b9e5-c38c-6e50-2853-efe6fd62a5cd" Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.553 [INFO][4156] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" iface="eth0" netns="/var/run/netns/cni-ee94b9e5-c38c-6e50-2853-efe6fd62a5cd" Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.558 [INFO][4156] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" iface="eth0" netns="/var/run/netns/cni-ee94b9e5-c38c-6e50-2853-efe6fd62a5cd" Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.559 [INFO][4156] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.559 [INFO][4156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.698 [INFO][4222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" HandleID="k8s-pod-network.bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.701 [INFO][4222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.703 [INFO][4222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.724 [WARNING][4222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" HandleID="k8s-pod-network.bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.724 [INFO][4222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" HandleID="k8s-pod-network.bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.728 [INFO][4222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:36.770062 containerd[1586]: 2025-01-30 16:03:36.761 [INFO][4156] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:36.773935 containerd[1586]: time="2025-01-30T16:03:36.771969620Z" level=info msg="TearDown network for sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\" successfully" Jan 30 16:03:36.773935 containerd[1586]: time="2025-01-30T16:03:36.772000509Z" level=info msg="StopPodSandbox for \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\" returns successfully" Jan 30 16:03:36.776484 systemd[1]: run-netns-cni\x2dee94b9e5\x2dc38c\x2d6e50\x2d2853\x2defe6fd62a5cd.mount: Deactivated successfully. Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.520 [INFO][4160] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.523 [INFO][4160] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" iface="eth0" netns="/var/run/netns/cni-0aadc757-e521-6e82-dad4-cb685f2b23cd" Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.523 [INFO][4160] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" iface="eth0" netns="/var/run/netns/cni-0aadc757-e521-6e82-dad4-cb685f2b23cd" Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.533 [INFO][4160] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" iface="eth0" netns="/var/run/netns/cni-0aadc757-e521-6e82-dad4-cb685f2b23cd" Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.533 [INFO][4160] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.533 [INFO][4160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.747 [INFO][4215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" HandleID="k8s-pod-network.dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.758 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.758 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.812 [WARNING][4215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" HandleID="k8s-pod-network.dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.812 [INFO][4215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" HandleID="k8s-pod-network.dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.819 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:36.828164 containerd[1586]: 2025-01-30 16:03:36.825 [INFO][4160] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.513 [INFO][4147] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.514 [INFO][4147] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" iface="eth0" netns="/var/run/netns/cni-e717cdf3-1d48-6a4e-9599-0d4ab81e3862" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.520 [INFO][4147] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" iface="eth0" netns="/var/run/netns/cni-e717cdf3-1d48-6a4e-9599-0d4ab81e3862" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.545 [INFO][4147] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" iface="eth0" netns="/var/run/netns/cni-e717cdf3-1d48-6a4e-9599-0d4ab81e3862" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.552 [INFO][4147] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.552 [INFO][4147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.795 [INFO][4221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" HandleID="k8s-pod-network.026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.796 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.819 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.831 [WARNING][4221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" HandleID="k8s-pod-network.026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.831 [INFO][4221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" HandleID="k8s-pod-network.026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.834 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:36.836977 containerd[1586]: 2025-01-30 16:03:36.835 [INFO][4147] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:36.980701 containerd[1586]: time="2025-01-30T16:03:36.980574037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9985758dc-qxvp4,Uid:9de676ba-fb3c-4eb2-bb01-cf41724693f5,Namespace:calico-apiserver,Attempt:1,}" Jan 30 16:03:36.987372 containerd[1586]: time="2025-01-30T16:03:36.986112993Z" level=info msg="TearDown network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\" successfully" Jan 30 16:03:36.987372 containerd[1586]: time="2025-01-30T16:03:36.986229412Z" level=info msg="StopPodSandbox for \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\" returns successfully" Jan 30 16:03:36.987372 containerd[1586]: time="2025-01-30T16:03:36.986308692Z" level=info msg="TearDown network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\" successfully" Jan 30 16:03:36.987372 containerd[1586]: time="2025-01-30T16:03:36.986356902Z" level=info msg="StopPodSandbox for \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\" returns successfully" Jan 30 16:03:36.994433 systemd[1]: run-netns-cni\x2de717cdf3\x2d1d48\x2d6a4e\x2d9599\x2d0d4ab81e3862.mount: Deactivated successfully. Jan 30 16:03:36.996870 containerd[1586]: time="2025-01-30T16:03:36.994891479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85dd67d8df-dv7gp,Uid:40fbe7ed-2398-4d81-b72d-863ad9e6ffb9,Namespace:calico-system,Attempt:1,}" Jan 30 16:03:36.994992 systemd[1]: run-netns-cni\x2d0aadc757\x2de521\x2d6e82\x2ddad4\x2dcb685f2b23cd.mount: Deactivated successfully. Jan 30 16:03:37.007630 containerd[1586]: time="2025-01-30T16:03:37.005400847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-thhwz,Uid:ebe91c93-3db1-469e-ba8a-da3a8ddda05d,Namespace:kube-system,Attempt:1,}" Jan 30 16:03:37.059112 kernel: bpftool[4291]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 16:03:37.354971 systemd-networkd[1207]: vxlan.calico: Link UP Jan 30 16:03:37.354981 systemd-networkd[1207]: vxlan.calico: Gained carrier Jan 30 16:03:37.873268 systemd-journald[1122]: Under memory pressure, flushing caches. Jan 30 16:03:37.827675 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 16:03:37.827764 systemd-resolved[1468]: Flushed all caches. 
Jan 30 16:03:38.171461 systemd-networkd[1207]: califedf616e5a6: Link UP Jan 30 16:03:38.171651 systemd-networkd[1207]: califedf616e5a6: Gained carrier Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.061 [INFO][4358] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0 calico-kube-controllers-85dd67d8df- calico-system 40fbe7ed-2398-4d81-b72d-863ad9e6ffb9 764 0 2025-01-30 16:03:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85dd67d8df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-2-e08351c9d9.novalocal calico-kube-controllers-85dd67d8df-dv7gp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califedf616e5a6 [] []}} ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Namespace="calico-system" Pod="calico-kube-controllers-85dd67d8df-dv7gp" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.061 [INFO][4358] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Namespace="calico-system" Pod="calico-kube-controllers-85dd67d8df-dv7gp" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.107 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" HandleID="k8s-pod-network.c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.121 [INFO][4394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" HandleID="k8s-pod-network.c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-2-e08351c9d9.novalocal", "pod":"calico-kube-controllers-85dd67d8df-dv7gp", "timestamp":"2025-01-30 16:03:38.107390669 +0000 UTC"}, Hostname:"ci-4081-3-0-2-e08351c9d9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.121 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.121 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.121 [INFO][4394] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-e08351c9d9.novalocal' Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.124 [INFO][4394] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.131 [INFO][4394] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.136 [INFO][4394] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.139 [INFO][4394] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.142 [INFO][4394] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.142 [INFO][4394] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.145 [INFO][4394] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74 Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.157 [INFO][4394] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.163 [INFO][4394] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.1/26] block=192.168.17.0/26 handle="k8s-pod-network.c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.163 [INFO][4394] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.1/26] handle="k8s-pod-network.c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.163 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 16:03:38.198067 containerd[1586]: 2025-01-30 16:03:38.163 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.1/26] IPv6=[] ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" HandleID="k8s-pod-network.c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:38.200444 containerd[1586]: 2025-01-30 16:03:38.165 [INFO][4358] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Namespace="calico-system" Pod="calico-kube-controllers-85dd67d8df-dv7gp" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0", GenerateName:"calico-kube-controllers-85dd67d8df-", Namespace:"calico-system", SelfLink:"", UID:"40fbe7ed-2398-4d81-b72d-863ad9e6ffb9", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85dd67d8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"", Pod:"calico-kube-controllers-85dd67d8df-dv7gp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califedf616e5a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:38.200444 containerd[1586]: 2025-01-30 16:03:38.166 [INFO][4358] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.1/32] ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Namespace="calico-system" Pod="calico-kube-controllers-85dd67d8df-dv7gp" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:38.200444 containerd[1586]: 2025-01-30 16:03:38.166 [INFO][4358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califedf616e5a6 ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Namespace="calico-system" Pod="calico-kube-controllers-85dd67d8df-dv7gp" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:38.200444 containerd[1586]: 2025-01-30 16:03:38.172 [INFO][4358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Namespace="calico-system" Pod="calico-kube-controllers-85dd67d8df-dv7gp" 
WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:38.200444 containerd[1586]: 2025-01-30 16:03:38.173 [INFO][4358] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Namespace="calico-system" Pod="calico-kube-controllers-85dd67d8df-dv7gp" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0", GenerateName:"calico-kube-controllers-85dd67d8df-", Namespace:"calico-system", SelfLink:"", UID:"40fbe7ed-2398-4d81-b72d-863ad9e6ffb9", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85dd67d8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74", Pod:"calico-kube-controllers-85dd67d8df-dv7gp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califedf616e5a6", MAC:"92:21:51:3d:6b:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:38.200444 containerd[1586]: 2025-01-30 16:03:38.189 [INFO][4358] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74" Namespace="calico-system" Pod="calico-kube-controllers-85dd67d8df-dv7gp" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:38.230212 systemd-networkd[1207]: calic5b3ef24257: Link UP Jan 30 16:03:38.230428 systemd-networkd[1207]: calic5b3ef24257: Gained carrier Jan 30 16:03:38.243231 containerd[1586]: time="2025-01-30T16:03:38.241724813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:03:38.243231 containerd[1586]: time="2025-01-30T16:03:38.241778054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:03:38.243231 containerd[1586]: time="2025-01-30T16:03:38.241801738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:38.243231 containerd[1586]: time="2025-01-30T16:03:38.241920121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:38.266808 containerd[1586]: time="2025-01-30T16:03:38.266767870Z" level=info msg="StopPodSandbox for \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\"" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.062 [INFO][4359] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0 calico-apiserver-9985758dc- calico-apiserver 9de676ba-fb3c-4eb2-bb01-cf41724693f5 766 0 2025-01-30 16:03:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9985758dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-2-e08351c9d9.novalocal calico-apiserver-9985758dc-qxvp4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic5b3ef24257 [] []}} ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-qxvp4" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.062 [INFO][4359] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-qxvp4" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.126 [INFO][4393] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" HandleID="k8s-pod-network.77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.147 [INFO][4393] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" HandleID="k8s-pod-network.77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-2-e08351c9d9.novalocal", "pod":"calico-apiserver-9985758dc-qxvp4", "timestamp":"2025-01-30 16:03:38.126234827 +0000 UTC"}, Hostname:"ci-4081-3-0-2-e08351c9d9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.147 [INFO][4393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.163 [INFO][4393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.163 [INFO][4393] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-e08351c9d9.novalocal' Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.166 [INFO][4393] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.173 [INFO][4393] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.182 [INFO][4393] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.191 [INFO][4393] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.195 [INFO][4393] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.195 [INFO][4393] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.197 [INFO][4393] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.203 [INFO][4393] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.214 [INFO][4393] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.2/26] block=192.168.17.0/26 handle="k8s-pod-network.77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.214 [INFO][4393] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.2/26] handle="k8s-pod-network.77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.214 [INFO][4393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 16:03:38.272629 containerd[1586]: 2025-01-30 16:03:38.214 [INFO][4393] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.2/26] IPv6=[] ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" HandleID="k8s-pod-network.77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:38.273825 containerd[1586]: 2025-01-30 16:03:38.223 [INFO][4359] cni-plugin/k8s.go 386: Populated endpoint ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-qxvp4" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0", GenerateName:"calico-apiserver-9985758dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9de676ba-fb3c-4eb2-bb01-cf41724693f5", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9985758dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"", Pod:"calico-apiserver-9985758dc-qxvp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5b3ef24257", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:38.273825 containerd[1586]: 2025-01-30 16:03:38.223 [INFO][4359] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.2/32] ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-qxvp4" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:38.273825 containerd[1586]: 2025-01-30 16:03:38.223 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5b3ef24257 ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-qxvp4" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:38.273825 containerd[1586]: 2025-01-30 16:03:38.230 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-qxvp4" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:38.273825 containerd[1586]: 
2025-01-30 16:03:38.233 [INFO][4359] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-qxvp4" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0", GenerateName:"calico-apiserver-9985758dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9de676ba-fb3c-4eb2-bb01-cf41724693f5", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9985758dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b", Pod:"calico-apiserver-9985758dc-qxvp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5b3ef24257", MAC:"56:c7:16:20:d0:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:38.273825 containerd[1586]: 2025-01-30 16:03:38.263 [INFO][4359] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-qxvp4" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:38.294817 systemd-networkd[1207]: cali6e3319e6ea6: Link UP Jan 30 16:03:38.296233 systemd-networkd[1207]: cali6e3319e6ea6: Gained carrier Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.066 [INFO][4367] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0 coredns-7db6d8ff4d- kube-system ebe91c93-3db1-469e-ba8a-da3a8ddda05d 765 0 2025-01-30 16:03:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-2-e08351c9d9.novalocal coredns-7db6d8ff4d-thhwz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6e3319e6ea6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thhwz" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.066 
[INFO][4367] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thhwz" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.136 [INFO][4399] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" HandleID="k8s-pod-network.8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.156 [INFO][4399] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" HandleID="k8s-pod-network.8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003346a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-2-e08351c9d9.novalocal", "pod":"coredns-7db6d8ff4d-thhwz", "timestamp":"2025-01-30 16:03:38.136132639 +0000 UTC"}, Hostname:"ci-4081-3-0-2-e08351c9d9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.157 [INFO][4399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.216 [INFO][4399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
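The assignArgs dump above is the entire IPAM request for one pod: one IPv4 address and no IPv6, a handle of the form k8s-pod-network.<sandbox ID> (the prefix matching the CNI network name), attributes recording namespace, node, pod and timestamp, and empty IPv4Pools/IPv6Pools, i.e. no restriction on which enabled pool the address may come from. A sketch of building such a request, using a simplified local type in place of the real AutoAssignArgs from libcalico-go's ipam package:

package main

import "fmt"

// autoAssignArgs mirrors a subset of the fields visible in the dump
// above; the real type lives in libcalico-go and carries more.
type autoAssignArgs struct {
	Num4, Num6 int
	HandleID   string
	Attrs      map[string]string
	Hostname   string
}

// newPodRequest builds the request the plugin logs for a pod: one
// IPv4, handle derived from the pod's sandbox (container) ID.
func newPodRequest(node, ns, pod, sandboxID string) autoAssignArgs {
	return autoAssignArgs{
		Num4:     1,
		HandleID: "k8s-pod-network." + sandboxID,
		Attrs:    map[string]string{"namespace": ns, "node": node, "pod": pod},
		Hostname: node,
	}
}

func main() {
	r := newPodRequest(
		"ci-4081-3-0-2-e08351c9d9.novalocal",
		"kube-system",
		"coredns-7db6d8ff4d-thhwz",
		"8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4",
	)
	fmt.Printf("%+v\n", r)
}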
Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.216 [INFO][4399] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-e08351c9d9.novalocal' Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.221 [INFO][4399] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.231 [INFO][4399] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.243 [INFO][4399] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.247 [INFO][4399] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.261 [INFO][4399] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.261 [INFO][4399] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.264 [INFO][4399] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4 Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.272 [INFO][4399] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.286 [INFO][4399] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.3/26] block=192.168.17.0/26 handle="k8s-pod-network.8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.286 [INFO][4399] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.3/26] handle="k8s-pod-network.8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.286 [INFO][4399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
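The host-side interface names that systemd-networkd brings up in these entries (calic5b3ef24257, cali6e3319e6ea6) all follow one pattern: the fixed prefix cali plus 11 hex digits, 15 characters total, the maximum usable length for a Linux interface name (IFNAMSIZ minus the terminator). Calico derives the suffix deterministically from the workload endpoint identity so the same pod always maps to the same veth. A sketch of that style of derivation; the SHA-1 hash and the namespace/pod input used here are assumptions, and Calico's actual recipe may differ:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName builds a host-side name of the shape seen in the log:
// "cali" + 11 hex characters. The hash input and hash function are
// stand-ins; only the name shape is taken from the log.
func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Deterministic: the same workload always yields the same 15-char name.
	fmt.Println(vethName("kube-system/coredns-7db6d8ff4d-thhwz"))
}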
Jan 30 16:03:38.324989 containerd[1586]: 2025-01-30 16:03:38.286 [INFO][4399] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.3/26] IPv6=[] ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" HandleID="k8s-pod-network.8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:38.325694 containerd[1586]: 2025-01-30 16:03:38.289 [INFO][4367] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thhwz" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ebe91c93-3db1-469e-ba8a-da3a8ddda05d", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-thhwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e3319e6ea6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:38.325694 containerd[1586]: 2025-01-30 16:03:38.289 [INFO][4367] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.3/32] ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thhwz" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:38.325694 containerd[1586]: 2025-01-30 16:03:38.290 [INFO][4367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e3319e6ea6 ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thhwz" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:38.325694 containerd[1586]: 2025-01-30 16:03:38.298 [INFO][4367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-thhwz" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:38.325694 containerd[1586]: 2025-01-30 16:03:38.301 [INFO][4367] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thhwz" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ebe91c93-3db1-469e-ba8a-da3a8ddda05d", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4", Pod:"coredns-7db6d8ff4d-thhwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e3319e6ea6", MAC:"96:4e:fd:8e:81:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:38.325694 containerd[1586]: 2025-01-30 16:03:38.321 [INFO][4367] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thhwz" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:38.345053 containerd[1586]: time="2025-01-30T16:03:38.344528630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:03:38.345438 containerd[1586]: time="2025-01-30T16:03:38.345183104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:03:38.345438 containerd[1586]: time="2025-01-30T16:03:38.345211628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:38.345438 containerd[1586]: time="2025-01-30T16:03:38.345293884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:38.395058 containerd[1586]: time="2025-01-30T16:03:38.394954363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:03:38.395346 containerd[1586]: time="2025-01-30T16:03:38.395256412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:03:38.395346 containerd[1586]: time="2025-01-30T16:03:38.395322267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:38.395646 containerd[1586]: time="2025-01-30T16:03:38.395619237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:38.423775 containerd[1586]: time="2025-01-30T16:03:38.423320826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85dd67d8df-dv7gp,Uid:40fbe7ed-2398-4d81-b72d-863ad9e6ffb9,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74\"" Jan 30 16:03:38.440907 containerd[1586]: time="2025-01-30T16:03:38.440727204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.423 [INFO][4483] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.426 [INFO][4483] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" iface="eth0" netns="/var/run/netns/cni-d466d18a-d6e7-a2cb-3277-e8e991a53257" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.426 [INFO][4483] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" iface="eth0" netns="/var/run/netns/cni-d466d18a-d6e7-a2cb-3277-e8e991a53257" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.427 [INFO][4483] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" iface="eth0" netns="/var/run/netns/cni-d466d18a-d6e7-a2cb-3277-e8e991a53257" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.427 [INFO][4483] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.431 [INFO][4483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.486 [INFO][4574] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" HandleID="k8s-pod-network.36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.486 [INFO][4574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.486 [INFO][4574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.497 [WARNING][4574] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" HandleID="k8s-pod-network.36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.497 [INFO][4574] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" HandleID="k8s-pod-network.36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.500 [INFO][4574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:38.503685 containerd[1586]: 2025-01-30 16:03:38.501 [INFO][4483] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:38.504881 containerd[1586]: time="2025-01-30T16:03:38.504843739Z" level=info msg="TearDown network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\" successfully" Jan 30 16:03:38.505134 containerd[1586]: time="2025-01-30T16:03:38.504988603Z" level=info msg="StopPodSandbox for \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\" returns successfully" Jan 30 16:03:38.506008 containerd[1586]: time="2025-01-30T16:03:38.505428061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-thhwz,Uid:ebe91c93-3db1-469e-ba8a-da3a8ddda05d,Namespace:kube-system,Attempt:1,} returns sandbox id \"8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4\"" Jan 30 16:03:38.507118 containerd[1586]: time="2025-01-30T16:03:38.506728323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7chv9,Uid:14c2adf1-67af-4345-a606-62d6d8c2f702,Namespace:kube-system,Attempt:1,}" Jan 30 16:03:38.511576 containerd[1586]: time="2025-01-30T16:03:38.511545325Z" level=info msg="CreateContainer within sandbox \"8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 16:03:38.526324 containerd[1586]: time="2025-01-30T16:03:38.526291377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9985758dc-qxvp4,Uid:9de676ba-fb3c-4eb2-bb01-cf41724693f5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b\"" Jan 30 16:03:38.549249 containerd[1586]: time="2025-01-30T16:03:38.549203777Z" level=info msg="CreateContainer within sandbox \"8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0cba2d1d35f2f879b28dfcf4b31ca011becaff6829ea7990fe1f44ac84c4ead\"" Jan 30 16:03:38.561125 containerd[1586]: time="2025-01-30T16:03:38.561088374Z" level=info msg="StartContainer for \"a0cba2d1d35f2f879b28dfcf4b31ca011becaff6829ea7990fe1f44ac84c4ead\"" Jan 30 16:03:38.672050 containerd[1586]: time="2025-01-30T16:03:38.671125138Z" level=info msg="StartContainer for 
\"a0cba2d1d35f2f879b28dfcf4b31ca011becaff6829ea7990fe1f44ac84c4ead\" returns successfully" Jan 30 16:03:38.737780 systemd-networkd[1207]: calie032521f8f2: Link UP Jan 30 16:03:38.737955 systemd-networkd[1207]: calie032521f8f2: Gained carrier Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.601 [INFO][4602] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0 coredns-7db6d8ff4d- kube-system 14c2adf1-67af-4345-a606-62d6d8c2f702 782 0 2025-01-30 16:03:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-2-e08351c9d9.novalocal coredns-7db6d8ff4d-7chv9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie032521f8f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7chv9" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.603 [INFO][4602] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7chv9" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.670 [INFO][4639] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" HandleID="k8s-pod-network.ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.685 [INFO][4639] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" HandleID="k8s-pod-network.ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318b30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-2-e08351c9d9.novalocal", "pod":"coredns-7db6d8ff4d-7chv9", "timestamp":"2025-01-30 16:03:38.670179304 +0000 UTC"}, Hostname:"ci-4081-3-0-2-e08351c9d9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.685 [INFO][4639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.686 [INFO][4639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.686 [INFO][4639] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-e08351c9d9.novalocal' Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.687 [INFO][4639] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.692 [INFO][4639] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.702 [INFO][4639] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.712 [INFO][4639] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.719 [INFO][4639] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.719 [INFO][4639] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.720 [INFO][4639] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8 Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.725 [INFO][4639] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.734 [INFO][4639] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.4/26] block=192.168.17.0/26 handle="k8s-pod-network.ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.734 [INFO][4639] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.4/26] handle="k8s-pod-network.ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.734 [INFO][4639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 16:03:38.755666 containerd[1586]: 2025-01-30 16:03:38.734 [INFO][4639] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.4/26] IPv6=[] ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" HandleID="k8s-pod-network.ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.757005 containerd[1586]: 2025-01-30 16:03:38.735 [INFO][4602] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7chv9" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"14c2adf1-67af-4345-a606-62d6d8c2f702", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-7chv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie032521f8f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:38.757005 containerd[1586]: 2025-01-30 16:03:38.736 [INFO][4602] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.4/32] ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7chv9" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.757005 containerd[1586]: 2025-01-30 16:03:38.736 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie032521f8f2 ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7chv9" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.757005 containerd[1586]: 2025-01-30 16:03:38.737 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-7chv9" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.757005 containerd[1586]: 2025-01-30 16:03:38.738 [INFO][4602] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7chv9" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"14c2adf1-67af-4345-a606-62d6d8c2f702", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8", Pod:"coredns-7db6d8ff4d-7chv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie032521f8f2", MAC:"9e:d3:db:4d:ca:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:38.757005 containerd[1586]: 2025-01-30 16:03:38.751 [INFO][4602] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7chv9" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:38.788443 containerd[1586]: time="2025-01-30T16:03:38.788310790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:03:38.788630 containerd[1586]: time="2025-01-30T16:03:38.788459050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:03:38.789181 containerd[1586]: time="2025-01-30T16:03:38.789097053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:38.789411 containerd[1586]: time="2025-01-30T16:03:38.789362814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:38.852049 containerd[1586]: time="2025-01-30T16:03:38.851992917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7chv9,Uid:14c2adf1-67af-4345-a606-62d6d8c2f702,Namespace:kube-system,Attempt:1,} returns sandbox id \"ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8\"" Jan 30 16:03:38.856412 containerd[1586]: time="2025-01-30T16:03:38.856380660Z" level=info msg="CreateContainer within sandbox \"ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 16:03:38.876612 containerd[1586]: time="2025-01-30T16:03:38.876569955Z" level=info msg="CreateContainer within sandbox \"ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"600e443cd08027b55bd5af2f007cc4d19aa58ac6ea4536a5c36a53a7fe9138ee\"" Jan 30 16:03:38.878169 containerd[1586]: time="2025-01-30T16:03:38.877203880Z" level=info msg="StartContainer for \"600e443cd08027b55bd5af2f007cc4d19aa58ac6ea4536a5c36a53a7fe9138ee\"" Jan 30 16:03:38.942252 containerd[1586]: time="2025-01-30T16:03:38.942201147Z" level=info msg="StartContainer for \"600e443cd08027b55bd5af2f007cc4d19aa58ac6ea4536a5c36a53a7fe9138ee\" returns successfully" Jan 30 16:03:38.982824 systemd[1]: run-netns-cni\x2dd466d18a\x2dd6e7\x2da2cb\x2d3277\x2de8e991a53257.mount: Deactivated successfully. Jan 30 16:03:39.299430 systemd-networkd[1207]: califedf616e5a6: Gained IPv6LL Jan 30 16:03:39.363423 systemd-networkd[1207]: vxlan.calico: Gained IPv6LL Jan 30 16:03:39.555507 systemd-networkd[1207]: calic5b3ef24257: Gained IPv6LL Jan 30 16:03:39.678616 kubelet[2849]: I0130 16:03:39.677941 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7chv9" podStartSLOduration=37.677362083 podStartE2EDuration="37.677362083s" podCreationTimestamp="2025-01-30 16:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 16:03:39.672887176 +0000 UTC m=+51.553270084" watchObservedRunningTime="2025-01-30 16:03:39.677362083 +0000 UTC m=+51.557744921" Jan 30 16:03:39.711385 kubelet[2849]: I0130 16:03:39.711089 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-thhwz" podStartSLOduration=37.711069506 podStartE2EDuration="37.711069506s" podCreationTimestamp="2025-01-30 16:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 16:03:39.710876813 +0000 UTC m=+51.591259611" watchObservedRunningTime="2025-01-30 16:03:39.711069506 +0000 UTC m=+51.591452294" Jan 30 16:03:40.196118 systemd-networkd[1207]: cali6e3319e6ea6: Gained IPv6LL Jan 30 16:03:40.259824 containerd[1586]: time="2025-01-30T16:03:40.259756691Z" level=info msg="StopPodSandbox for \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\"" Jan 30 16:03:40.273818 containerd[1586]: time="2025-01-30T16:03:40.273788209Z" level=info msg="StopPodSandbox for \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\"" Jan 30 16:03:40.390447 systemd-networkd[1207]: calie032521f8f2: Gained IPv6LL Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.396 [INFO][4777] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.396 [INFO][4777] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" iface="eth0" netns="/var/run/netns/cni-9baea827-a97c-e2a3-2767-1d4b6624b23f" Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.397 [INFO][4777] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" iface="eth0" netns="/var/run/netns/cni-9baea827-a97c-e2a3-2767-1d4b6624b23f" Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.401 [INFO][4777] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" iface="eth0" netns="/var/run/netns/cni-9baea827-a97c-e2a3-2767-1d4b6624b23f" Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.401 [INFO][4777] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.401 [INFO][4777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.458 [INFO][4792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" HandleID="k8s-pod-network.efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.459 [INFO][4792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.459 [INFO][4792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.481 [WARNING][4792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" HandleID="k8s-pod-network.efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.481 [INFO][4792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" HandleID="k8s-pod-network.efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.483 [INFO][4792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:40.489379 containerd[1586]: 2025-01-30 16:03:40.487 [INFO][4777] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:40.490804 containerd[1586]: time="2025-01-30T16:03:40.490655584Z" level=info msg="TearDown network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\" successfully" Jan 30 16:03:40.490804 containerd[1586]: time="2025-01-30T16:03:40.490694477Z" level=info msg="StopPodSandbox for \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\" returns successfully" Jan 30 16:03:40.493687 systemd[1]: run-netns-cni\x2d9baea827\x2da97c\x2de2a3\x2d2767\x2d1d4b6624b23f.mount: Deactivated successfully. Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.437 [INFO][4785] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.438 [INFO][4785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" iface="eth0" netns="/var/run/netns/cni-2bcb8922-a09a-ee9c-139e-10d8aa438674" Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.438 [INFO][4785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" iface="eth0" netns="/var/run/netns/cni-2bcb8922-a09a-ee9c-139e-10d8aa438674" Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.438 [INFO][4785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" iface="eth0" netns="/var/run/netns/cni-2bcb8922-a09a-ee9c-139e-10d8aa438674" Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.439 [INFO][4785] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.439 [INFO][4785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.510 [INFO][4798] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" HandleID="k8s-pod-network.822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.510 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.510 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.519 [WARNING][4798] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" HandleID="k8s-pod-network.822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.519 [INFO][4798] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" HandleID="k8s-pod-network.822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.521 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:40.527946 containerd[1586]: 2025-01-30 16:03:40.525 [INFO][4785] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:40.529810 containerd[1586]: time="2025-01-30T16:03:40.529095169Z" level=info msg="TearDown network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\" successfully" Jan 30 16:03:40.529810 containerd[1586]: time="2025-01-30T16:03:40.529121548Z" level=info msg="StopPodSandbox for \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\" returns successfully" Jan 30 16:03:40.532050 containerd[1586]: time="2025-01-30T16:03:40.530649128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pt8fx,Uid:d9d99818-5b34-43e4-ab32-bbd2d570ff1b,Namespace:calico-system,Attempt:1,}" Jan 30 16:03:40.535331 systemd[1]: run-netns-cni\x2d2bcb8922\x2da09a\x2dee9c\x2d139e\x2d10d8aa438674.mount: Deactivated successfully. 
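The kubelet pod_startup_latency_tracker entries a few lines up can be checked by hand: for coredns-7db6d8ff4d-7chv9, watchObservedRunningTime (16:03:39.677362083) minus podCreationTimestamp (16:03:02) is exactly the logged podStartSLOduration of 37.677362083s, and because firstStartedPulling/lastFinishedPulling are the zero time (no image pull was needed), there is no pull window to subtract, so the SLO duration equals the end-to-end duration. The same arithmetic in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet entry for coredns-7db6d8ff4d-7chv9.
	created, _ := time.Parse(time.RFC3339, "2025-01-30T16:03:02Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-30T16:03:39.677362083Z")
	fmt.Println(running.Sub(created)) // 37.677362083s, matching podStartSLOduration
}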
Jan 30 16:03:40.548254 containerd[1586]: time="2025-01-30T16:03:40.548210399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9985758dc-6trrb,Uid:5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2,Namespace:calico-apiserver,Attempt:1,}" Jan 30 16:03:40.814831 systemd-networkd[1207]: calibbe80c44000: Link UP Jan 30 16:03:40.818608 systemd-networkd[1207]: calibbe80c44000: Gained carrier Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.636 [INFO][4805] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0 calico-apiserver-9985758dc- calico-apiserver 5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2 813 0 2025-01-30 16:03:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9985758dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-2-e08351c9d9.novalocal calico-apiserver-9985758dc-6trrb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibbe80c44000 [] []}} ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-6trrb" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.636 [INFO][4805] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-6trrb" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.698 [INFO][4825] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" HandleID="k8s-pod-network.619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.711 [INFO][4825] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" HandleID="k8s-pod-network.619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031bc60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-2-e08351c9d9.novalocal", "pod":"calico-apiserver-9985758dc-6trrb", "timestamp":"2025-01-30 16:03:40.698259357 +0000 UTC"}, Hostname:"ci-4081-3-0-2-e08351c9d9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.711 [INFO][4825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.711 [INFO][4825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.711 [INFO][4825] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-e08351c9d9.novalocal' Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.715 [INFO][4825] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.720 [INFO][4825] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.727 [INFO][4825] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.729 [INFO][4825] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.732 [INFO][4825] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.732 [INFO][4825] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.737 [INFO][4825] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2 Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.786 [INFO][4825] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.801 [INFO][4825] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.5/26] block=192.168.17.0/26 handle="k8s-pod-network.619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.802 [INFO][4825] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.5/26] handle="k8s-pod-network.619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.804 [INFO][4825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 16:03:40.850204 containerd[1586]: 2025-01-30 16:03:40.806 [INFO][4825] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.5/26] IPv6=[] ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" HandleID="k8s-pod-network.619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.850968 containerd[1586]: 2025-01-30 16:03:40.811 [INFO][4805] cni-plugin/k8s.go 386: Populated endpoint ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-6trrb" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0", GenerateName:"calico-apiserver-9985758dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9985758dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"", Pod:"calico-apiserver-9985758dc-6trrb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbe80c44000", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:40.850968 containerd[1586]: 2025-01-30 16:03:40.812 [INFO][4805] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.5/32] ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-6trrb" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.850968 containerd[1586]: 2025-01-30 16:03:40.812 [INFO][4805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbe80c44000 ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-6trrb" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.850968 containerd[1586]: 2025-01-30 16:03:40.816 [INFO][4805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-6trrb" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.850968 containerd[1586]: 
2025-01-30 16:03:40.822 [INFO][4805] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-6trrb" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0", GenerateName:"calico-apiserver-9985758dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9985758dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2", Pod:"calico-apiserver-9985758dc-6trrb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbe80c44000", MAC:"32:cb:a9:43:b1:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:40.850968 containerd[1586]: 2025-01-30 16:03:40.842 [INFO][4805] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2" Namespace="calico-apiserver" Pod="calico-apiserver-9985758dc-6trrb" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:40.906491 systemd-networkd[1207]: cali34333decb52: Link UP Jan 30 16:03:40.907829 systemd-networkd[1207]: cali34333decb52: Gained carrier Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.674 [INFO][4814] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0 csi-node-driver- calico-system d9d99818-5b34-43e4-ab32-bbd2d570ff1b 814 0 2025-01-30 16:03:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-2-e08351c9d9.novalocal csi-node-driver-pt8fx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali34333decb52 [] []}} ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Namespace="calico-system" Pod="csi-node-driver-pt8fx" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-" Jan 30 
16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.674 [INFO][4814] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Namespace="calico-system" Pod="csi-node-driver-pt8fx" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.719 [INFO][4832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" HandleID="k8s-pod-network.8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.729 [INFO][4832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" HandleID="k8s-pod-network.8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000514a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-2-e08351c9d9.novalocal", "pod":"csi-node-driver-pt8fx", "timestamp":"2025-01-30 16:03:40.719168769 +0000 UTC"}, Hostname:"ci-4081-3-0-2-e08351c9d9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.730 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.804 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.804 [INFO][4832] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-e08351c9d9.novalocal' Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.811 [INFO][4832] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.820 [INFO][4832] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.826 [INFO][4832] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.829 [INFO][4832] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.834 [INFO][4832] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.834 [INFO][4832] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.839 [INFO][4832] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.860 [INFO][4832] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.878 [INFO][4832] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.6/26] block=192.168.17.0/26 handle="k8s-pod-network.8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.878 [INFO][4832] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.6/26] handle="k8s-pod-network.8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" host="ci-4081-3-0-2-e08351c9d9.novalocal" Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.878 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 16:03:40.928870 containerd[1586]: 2025-01-30 16:03:40.878 [INFO][4832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.6/26] IPv6=[] ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" HandleID="k8s-pod-network.8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.929584 containerd[1586]: 2025-01-30 16:03:40.884 [INFO][4814] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Namespace="calico-system" Pod="csi-node-driver-pt8fx" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9d99818-5b34-43e4-ab32-bbd2d570ff1b", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"", Pod:"csi-node-driver-pt8fx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34333decb52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:40.929584 containerd[1586]: 2025-01-30 16:03:40.884 [INFO][4814] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.6/32] ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Namespace="calico-system" Pod="csi-node-driver-pt8fx" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.929584 containerd[1586]: 2025-01-30 16:03:40.884 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34333decb52 ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Namespace="calico-system" Pod="csi-node-driver-pt8fx" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.929584 containerd[1586]: 2025-01-30 16:03:40.908 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Namespace="calico-system" Pod="csi-node-driver-pt8fx" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.929584 containerd[1586]: 2025-01-30 16:03:40.909 [INFO][4814] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Namespace="calico-system" Pod="csi-node-driver-pt8fx" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9d99818-5b34-43e4-ab32-bbd2d570ff1b", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f", Pod:"csi-node-driver-pt8fx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34333decb52", MAC:"32:21:1a:08:7a:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:40.929584 containerd[1586]: 2025-01-30 16:03:40.925 [INFO][4814] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f" Namespace="calico-system" Pod="csi-node-driver-pt8fx" WorkloadEndpoint="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:40.950996 containerd[1586]: time="2025-01-30T16:03:40.950292075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:03:40.950996 containerd[1586]: time="2025-01-30T16:03:40.950373879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:03:40.950996 containerd[1586]: time="2025-01-30T16:03:40.950403024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:40.950996 containerd[1586]: time="2025-01-30T16:03:40.950531436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:41.005531 containerd[1586]: time="2025-01-30T16:03:41.005215159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 16:03:41.005531 containerd[1586]: time="2025-01-30T16:03:41.005275422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 16:03:41.005531 containerd[1586]: time="2025-01-30T16:03:41.005293967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:41.005531 containerd[1586]: time="2025-01-30T16:03:41.005390168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 16:03:41.060307 containerd[1586]: time="2025-01-30T16:03:41.060257936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pt8fx,Uid:d9d99818-5b34-43e4-ab32-bbd2d570ff1b,Namespace:calico-system,Attempt:1,} returns sandbox id \"8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f\"" Jan 30 16:03:41.084517 containerd[1586]: time="2025-01-30T16:03:41.082804990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9985758dc-6trrb,Uid:5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2\"" Jan 30 16:03:41.861705 systemd-journald[1122]: Under memory pressure, flushing caches. Jan 30 16:03:41.859280 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 16:03:41.859317 systemd-resolved[1468]: Flushed all caches. Jan 30 16:03:42.243331 systemd-networkd[1207]: cali34333decb52: Gained IPv6LL Jan 30 16:03:42.371505 systemd-networkd[1207]: calibbe80c44000: Gained IPv6LL Jan 30 16:03:42.384917 containerd[1586]: time="2025-01-30T16:03:42.384808032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:42.389098 containerd[1586]: time="2025-01-30T16:03:42.387855815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 16:03:42.393242 containerd[1586]: time="2025-01-30T16:03:42.391980467Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:42.400551 containerd[1586]: time="2025-01-30T16:03:42.400475866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:42.404661 containerd[1586]: time="2025-01-30T16:03:42.404566876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.963416955s" Jan 30 16:03:42.405034 containerd[1586]: time="2025-01-30T16:03:42.404966379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 16:03:42.418116 containerd[1586]: time="2025-01-30T16:03:42.411950941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 16:03:42.473366 containerd[1586]: time="2025-01-30T16:03:42.473203898Z" level=info msg="CreateContainer within sandbox 
\"c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 16:03:42.512111 containerd[1586]: time="2025-01-30T16:03:42.511914406Z" level=info msg="CreateContainer within sandbox \"c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"30da8636c988901459e7b71ee890344edabebe83db66d543eb17c8f8ab7960a3\"" Jan 30 16:03:42.513827 containerd[1586]: time="2025-01-30T16:03:42.513784440Z" level=info msg="StartContainer for \"30da8636c988901459e7b71ee890344edabebe83db66d543eb17c8f8ab7960a3\"" Jan 30 16:03:42.760678 containerd[1586]: time="2025-01-30T16:03:42.760414567Z" level=info msg="StartContainer for \"30da8636c988901459e7b71ee890344edabebe83db66d543eb17c8f8ab7960a3\" returns successfully" Jan 30 16:03:43.849633 kubelet[2849]: I0130 16:03:43.849249 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85dd67d8df-dv7gp" podStartSLOduration=29.87785276 podStartE2EDuration="33.849212195s" podCreationTimestamp="2025-01-30 16:03:10 +0000 UTC" firstStartedPulling="2025-01-30 16:03:38.437985234 +0000 UTC m=+50.318368023" lastFinishedPulling="2025-01-30 16:03:42.40934463 +0000 UTC m=+54.289727458" observedRunningTime="2025-01-30 16:03:43.83994865 +0000 UTC m=+55.720331519" watchObservedRunningTime="2025-01-30 16:03:43.849212195 +0000 UTC m=+55.729595013" Jan 30 16:03:43.909376 systemd-journald[1122]: Under memory pressure, flushing caches. Jan 30 16:03:43.907119 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 16:03:43.907472 systemd-resolved[1468]: Flushed all caches. Jan 30 16:03:45.907036 containerd[1586]: time="2025-01-30T16:03:45.905357950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:45.907036 containerd[1586]: time="2025-01-30T16:03:45.906964678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 16:03:45.910180 containerd[1586]: time="2025-01-30T16:03:45.910119451Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:45.914045 containerd[1586]: time="2025-01-30T16:03:45.913992897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:45.914842 containerd[1586]: time="2025-01-30T16:03:45.914801541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.502648479s" Jan 30 16:03:45.914895 containerd[1586]: time="2025-01-30T16:03:45.914843189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 16:03:45.917382 containerd[1586]: time="2025-01-30T16:03:45.917355351Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 16:03:45.918382 containerd[1586]: time="2025-01-30T16:03:45.918223867Z" level=info msg="CreateContainer within sandbox \"77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 16:03:45.943487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount407200607.mount: Deactivated successfully. Jan 30 16:03:46.048728 containerd[1586]: time="2025-01-30T16:03:46.045881187Z" level=info msg="CreateContainer within sandbox \"77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf4058a4b2c92c375f1ae54c5b1fbf1bc95a5560b238692cc16db52b7ee5e240\"" Jan 30 16:03:46.055313 containerd[1586]: time="2025-01-30T16:03:46.054804023Z" level=info msg="StartContainer for \"bf4058a4b2c92c375f1ae54c5b1fbf1bc95a5560b238692cc16db52b7ee5e240\"" Jan 30 16:03:46.314668 containerd[1586]: time="2025-01-30T16:03:46.314490193Z" level=info msg="StartContainer for \"bf4058a4b2c92c375f1ae54c5b1fbf1bc95a5560b238692cc16db52b7ee5e240\" returns successfully" Jan 30 16:03:46.831069 kubelet[2849]: I0130 16:03:46.828513 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9985758dc-qxvp4" podStartSLOduration=30.440327517 podStartE2EDuration="37.828483509s" podCreationTimestamp="2025-01-30 16:03:09 +0000 UTC" firstStartedPulling="2025-01-30 16:03:38.528151104 +0000 UTC m=+50.408533892" lastFinishedPulling="2025-01-30 16:03:45.916307096 +0000 UTC m=+57.796689884" observedRunningTime="2025-01-30 16:03:46.828292689 +0000 UTC m=+58.708675567" watchObservedRunningTime="2025-01-30 16:03:46.828483509 +0000 UTC m=+58.708866347" Jan 30 16:03:47.778688 containerd[1586]: time="2025-01-30T16:03:47.777763748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:47.778688 containerd[1586]: time="2025-01-30T16:03:47.778643605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 16:03:47.780556 containerd[1586]: time="2025-01-30T16:03:47.780475185Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:47.783299 containerd[1586]: time="2025-01-30T16:03:47.783273575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:47.784616 containerd[1586]: time="2025-01-30T16:03:47.784006415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.865716843s" Jan 30 16:03:47.784616 containerd[1586]: time="2025-01-30T16:03:47.784067941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 16:03:47.785326 containerd[1586]: time="2025-01-30T16:03:47.785306093Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 16:03:47.788608 containerd[1586]: time="2025-01-30T16:03:47.788572474Z" level=info msg="CreateContainer within sandbox \"8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 16:03:47.816228 containerd[1586]: time="2025-01-30T16:03:47.815586032Z" level=info msg="CreateContainer within sandbox \"8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"304630507f1a237c245f3917dba4aed043a78fbf4a042ff911dffd7d4799b4f5\"" Jan 30 16:03:47.818577 containerd[1586]: time="2025-01-30T16:03:47.818176370Z" level=info msg="StartContainer for \"304630507f1a237c245f3917dba4aed043a78fbf4a042ff911dffd7d4799b4f5\"" Jan 30 16:03:47.887391 containerd[1586]: time="2025-01-30T16:03:47.887335092Z" level=info msg="StartContainer for \"304630507f1a237c245f3917dba4aed043a78fbf4a042ff911dffd7d4799b4f5\" returns successfully" Jan 30 16:03:48.180583 containerd[1586]: time="2025-01-30T16:03:48.180386054Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:48.182492 containerd[1586]: time="2025-01-30T16:03:48.182419554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 16:03:48.188077 containerd[1586]: time="2025-01-30T16:03:48.187947764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 402.453807ms" Jan 30 16:03:48.188077 containerd[1586]: time="2025-01-30T16:03:48.188003419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 16:03:48.191295 containerd[1586]: time="2025-01-30T16:03:48.190454555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 16:03:48.197686 containerd[1586]: time="2025-01-30T16:03:48.197380327Z" level=info msg="CreateContainer within sandbox \"619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 16:03:48.229352 containerd[1586]: time="2025-01-30T16:03:48.227956397Z" level=info msg="CreateContainer within sandbox \"619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d2914d6cf919fde466422adae5254daa49aafa7820748e1745df646067094619\"" Jan 30 16:03:48.232835 containerd[1586]: time="2025-01-30T16:03:48.232777026Z" level=info msg="StartContainer for \"d2914d6cf919fde466422adae5254daa49aafa7820748e1745df646067094619\"" Jan 30 16:03:48.255456 containerd[1586]: time="2025-01-30T16:03:48.255388658Z" level=info msg="StopPodSandbox for \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\"" Jan 30 16:03:48.353946 containerd[1586]: time="2025-01-30T16:03:48.353893622Z" level=info msg="StartContainer for \"d2914d6cf919fde466422adae5254daa49aafa7820748e1745df646067094619\" returns successfully" Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 
16:03:48.336 [WARNING][5138] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0", GenerateName:"calico-apiserver-9985758dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9de676ba-fb3c-4eb2-bb01-cf41724693f5", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9985758dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b", Pod:"calico-apiserver-9985758dc-qxvp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5b3ef24257", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.336 [INFO][5138] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.336 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" iface="eth0" netns="" Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.336 [INFO][5138] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.337 [INFO][5138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.378 [INFO][5161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" HandleID="k8s-pod-network.bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.378 [INFO][5161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.379 [INFO][5161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.389 [WARNING][5161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" HandleID="k8s-pod-network.bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.389 [INFO][5161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" HandleID="k8s-pod-network.bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.391 [INFO][5161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:48.394169 containerd[1586]: 2025-01-30 16:03:48.392 [INFO][5138] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:48.394638 containerd[1586]: time="2025-01-30T16:03:48.394212489Z" level=info msg="TearDown network for sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\" successfully" Jan 30 16:03:48.394638 containerd[1586]: time="2025-01-30T16:03:48.394238598Z" level=info msg="StopPodSandbox for \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\" returns successfully" Jan 30 16:03:48.394927 containerd[1586]: time="2025-01-30T16:03:48.394901186Z" level=info msg="RemovePodSandbox for \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\"" Jan 30 16:03:48.394976 containerd[1586]: time="2025-01-30T16:03:48.394935762Z" level=info msg="Forcibly stopping sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\"" Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.435 [WARNING][5185] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0", GenerateName:"calico-apiserver-9985758dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9de676ba-fb3c-4eb2-bb01-cf41724693f5", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9985758dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"77f9960b6fb4550087ac225502eec022dca13a95781cddeb791c74cc3e2bc11b", Pod:"calico-apiserver-9985758dc-qxvp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5b3ef24257", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.436 [INFO][5185] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.436 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" iface="eth0" netns="" Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.436 [INFO][5185] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.436 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.478 [INFO][5192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" HandleID="k8s-pod-network.bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.478 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.478 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.485 [WARNING][5192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" HandleID="k8s-pod-network.bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.485 [INFO][5192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" HandleID="k8s-pod-network.bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--qxvp4-eth0" Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.487 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:48.493069 containerd[1586]: 2025-01-30 16:03:48.489 [INFO][5185] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a" Jan 30 16:03:48.493069 containerd[1586]: time="2025-01-30T16:03:48.492967354Z" level=info msg="TearDown network for sandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\" successfully" Jan 30 16:03:48.500244 containerd[1586]: time="2025-01-30T16:03:48.499897644Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 16:03:48.500244 containerd[1586]: time="2025-01-30T16:03:48.500011369Z" level=info msg="RemovePodSandbox \"bb389bece6f696d87da701fb4336672ce367892b09a729a4485ca7ddb2d47a5a\" returns successfully" Jan 30 16:03:48.504144 containerd[1586]: time="2025-01-30T16:03:48.500808149Z" level=info msg="StopPodSandbox for \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\"" Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.568 [WARNING][5210] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"14c2adf1-67af-4345-a606-62d6d8c2f702", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8", Pod:"coredns-7db6d8ff4d-7chv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie032521f8f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.568 [INFO][5210] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.568 [INFO][5210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" iface="eth0" netns="" Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.568 [INFO][5210] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.568 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.604 [INFO][5217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" HandleID="k8s-pod-network.36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.605 [INFO][5217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.605 [INFO][5217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.622 [WARNING][5217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" HandleID="k8s-pod-network.36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.622 [INFO][5217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" HandleID="k8s-pod-network.36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.624 [INFO][5217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:48.628037 containerd[1586]: 2025-01-30 16:03:48.625 [INFO][5210] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:48.629538 containerd[1586]: time="2025-01-30T16:03:48.628002480Z" level=info msg="TearDown network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\" successfully" Jan 30 16:03:48.629538 containerd[1586]: time="2025-01-30T16:03:48.628212085Z" level=info msg="StopPodSandbox for \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\" returns successfully" Jan 30 16:03:48.629538 containerd[1586]: time="2025-01-30T16:03:48.628683172Z" level=info msg="RemovePodSandbox for \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\"" Jan 30 16:03:48.629538 containerd[1586]: time="2025-01-30T16:03:48.628753735Z" level=info msg="Forcibly stopping sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\"" Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.700 [WARNING][5235] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"14c2adf1-67af-4345-a606-62d6d8c2f702", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"ee61a70846fbd0aa58d12a34d5b5d8b22ad524b18a0602480e6139afaa1e81c8", Pod:"coredns-7db6d8ff4d-7chv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie032521f8f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.702 [INFO][5235] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.702 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" iface="eth0" netns="" Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.708 [INFO][5235] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.709 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.778 [INFO][5241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" HandleID="k8s-pod-network.36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.780 [INFO][5241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.780 [INFO][5241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.798 [WARNING][5241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" HandleID="k8s-pod-network.36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.798 [INFO][5241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" HandleID="k8s-pod-network.36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--7chv9-eth0" Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.803 [INFO][5241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:48.818841 containerd[1586]: 2025-01-30 16:03:48.810 [INFO][5235] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed" Jan 30 16:03:48.818841 containerd[1586]: time="2025-01-30T16:03:48.816441125Z" level=info msg="TearDown network for sandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\" successfully" Jan 30 16:03:48.867241 containerd[1586]: time="2025-01-30T16:03:48.867112700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 16:03:48.867403 containerd[1586]: time="2025-01-30T16:03:48.867238507Z" level=info msg="RemovePodSandbox \"36ecc2296cc5c39fccb239064bf792c1c909ee126712884cb5c3dfa20d830fed\" returns successfully" Jan 30 16:03:48.870384 containerd[1586]: time="2025-01-30T16:03:48.869794459Z" level=info msg="StopPodSandbox for \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\"" Jan 30 16:03:48.870570 kubelet[2849]: I0130 16:03:48.870203 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9985758dc-6trrb" podStartSLOduration=32.767622666 podStartE2EDuration="39.870180747s" podCreationTimestamp="2025-01-30 16:03:09 +0000 UTC" firstStartedPulling="2025-01-30 16:03:41.087103762 +0000 UTC m=+52.967486550" lastFinishedPulling="2025-01-30 16:03:48.189661813 +0000 UTC m=+60.070044631" observedRunningTime="2025-01-30 16:03:48.867943604 +0000 UTC m=+60.748326412" watchObservedRunningTime="2025-01-30 16:03:48.870180747 +0000 UTC m=+60.750563535" Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.940 [WARNING][5260] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0", GenerateName:"calico-apiserver-9985758dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9985758dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2", Pod:"calico-apiserver-9985758dc-6trrb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbe80c44000", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.940 [INFO][5260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.940 [INFO][5260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" iface="eth0" netns="" Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.940 [INFO][5260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.940 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.970 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" HandleID="k8s-pod-network.efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.970 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.970 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.982 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" HandleID="k8s-pod-network.efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.982 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" HandleID="k8s-pod-network.efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.984 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:48.990264 containerd[1586]: 2025-01-30 16:03:48.988 [INFO][5260] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:48.990915 containerd[1586]: time="2025-01-30T16:03:48.990744511Z" level=info msg="TearDown network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\" successfully" Jan 30 16:03:48.990915 containerd[1586]: time="2025-01-30T16:03:48.990776351Z" level=info msg="StopPodSandbox for \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\" returns successfully" Jan 30 16:03:48.991740 containerd[1586]: time="2025-01-30T16:03:48.991393473Z" level=info msg="RemovePodSandbox for \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\"" Jan 30 16:03:48.991740 containerd[1586]: time="2025-01-30T16:03:48.991420574Z" level=info msg="Forcibly stopping sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\"" Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.044 [WARNING][5286] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0", GenerateName:"calico-apiserver-9985758dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d2e8a5c-b2c6-481d-b7bd-2c3bcbfa62f2", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9985758dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"619f290c2d54379730c30e90afd5426673fee207eb58ebd31f9d3a909b18b6f2", Pod:"calico-apiserver-9985758dc-6trrb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbe80c44000", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.045 [INFO][5286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.045 [INFO][5286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" iface="eth0" netns="" Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.045 [INFO][5286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.045 [INFO][5286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.072 [INFO][5293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" HandleID="k8s-pod-network.efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.072 [INFO][5293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.073 [INFO][5293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.080 [WARNING][5293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" HandleID="k8s-pod-network.efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.080 [INFO][5293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" HandleID="k8s-pod-network.efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--apiserver--9985758dc--6trrb-eth0" Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.082 [INFO][5293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:49.086011 containerd[1586]: 2025-01-30 16:03:49.084 [INFO][5286] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2" Jan 30 16:03:49.086011 containerd[1586]: time="2025-01-30T16:03:49.086079249Z" level=info msg="TearDown network for sandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\" successfully" Jan 30 16:03:49.093841 containerd[1586]: time="2025-01-30T16:03:49.093808724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 16:03:49.094004 containerd[1586]: time="2025-01-30T16:03:49.093983382Z" level=info msg="RemovePodSandbox \"efcf3d2181df361876e2ffc3adf6e81340b403c83d06a24c3ec575d09fabafa2\" returns successfully" Jan 30 16:03:49.095260 containerd[1586]: time="2025-01-30T16:03:49.095239699Z" level=info msg="StopPodSandbox for \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\"" Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.147 [WARNING][5311] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ebe91c93-3db1-469e-ba8a-da3a8ddda05d", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4", Pod:"coredns-7db6d8ff4d-thhwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e3319e6ea6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.147 [INFO][5311] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.147 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" iface="eth0" netns="" Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.147 [INFO][5311] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.147 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.177 [INFO][5317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" HandleID="k8s-pod-network.dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.177 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.177 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.185 [WARNING][5317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" HandleID="k8s-pod-network.dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.185 [INFO][5317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" HandleID="k8s-pod-network.dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.187 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:49.189391 containerd[1586]: 2025-01-30 16:03:49.188 [INFO][5311] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:49.190196 containerd[1586]: time="2025-01-30T16:03:49.189996536Z" level=info msg="TearDown network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\" successfully" Jan 30 16:03:49.190196 containerd[1586]: time="2025-01-30T16:03:49.190058722Z" level=info msg="StopPodSandbox for \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\" returns successfully" Jan 30 16:03:49.192096 containerd[1586]: time="2025-01-30T16:03:49.191426969Z" level=info msg="RemovePodSandbox for \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\"" Jan 30 16:03:49.192096 containerd[1586]: time="2025-01-30T16:03:49.191458358Z" level=info msg="Forcibly stopping sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\"" Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.233 [WARNING][5335] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ebe91c93-3db1-469e-ba8a-da3a8ddda05d", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"8c3db32a30a39f7e2f49e88fe375c4772d28bd85eab41b6a47ab4717094be1a4", Pod:"coredns-7db6d8ff4d-thhwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e3319e6ea6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.234 [INFO][5335] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.234 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" iface="eth0" netns="" Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.234 [INFO][5335] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.234 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.262 [INFO][5341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" HandleID="k8s-pod-network.dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.262 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.262 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.269 [WARNING][5341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" HandleID="k8s-pod-network.dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.269 [INFO][5341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" HandleID="k8s-pod-network.dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-coredns--7db6d8ff4d--thhwz-eth0" Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.270 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:49.272791 containerd[1586]: 2025-01-30 16:03:49.271 [INFO][5335] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557" Jan 30 16:03:49.273713 containerd[1586]: time="2025-01-30T16:03:49.273204606Z" level=info msg="TearDown network for sandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\" successfully" Jan 30 16:03:49.277256 containerd[1586]: time="2025-01-30T16:03:49.277093699Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 16:03:49.277256 containerd[1586]: time="2025-01-30T16:03:49.277176385Z" level=info msg="RemovePodSandbox \"dbc58f4925ce85158f6a01d8d3beecef53ba5d2329fd9589ed8eaf18f7992557\" returns successfully" Jan 30 16:03:49.278653 containerd[1586]: time="2025-01-30T16:03:49.278420317Z" level=info msg="StopPodSandbox for \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\"" Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.325 [WARNING][5360] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9d99818-5b34-43e4-ab32-bbd2d570ff1b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f", Pod:"csi-node-driver-pt8fx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34333decb52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.326 [INFO][5360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.326 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" iface="eth0" netns="" Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.326 [INFO][5360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.326 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.349 [INFO][5367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" HandleID="k8s-pod-network.822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.349 [INFO][5367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.349 [INFO][5367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.357 [WARNING][5367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" HandleID="k8s-pod-network.822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.357 [INFO][5367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" HandleID="k8s-pod-network.822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.359 [INFO][5367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:49.363185 containerd[1586]: 2025-01-30 16:03:49.361 [INFO][5360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:49.363185 containerd[1586]: time="2025-01-30T16:03:49.363133973Z" level=info msg="TearDown network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\" successfully" Jan 30 16:03:49.363185 containerd[1586]: time="2025-01-30T16:03:49.363159922Z" level=info msg="StopPodSandbox for \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\" returns successfully" Jan 30 16:03:49.363931 containerd[1586]: time="2025-01-30T16:03:49.363679420Z" level=info msg="RemovePodSandbox for \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\"" Jan 30 16:03:49.363931 containerd[1586]: time="2025-01-30T16:03:49.363704579Z" level=info msg="Forcibly stopping sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\"" Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.409 [WARNING][5386] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9d99818-5b34-43e4-ab32-bbd2d570ff1b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f", Pod:"csi-node-driver-pt8fx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34333decb52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.409 [INFO][5386] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.409 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" iface="eth0" netns="" Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.409 [INFO][5386] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.409 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.431 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" HandleID="k8s-pod-network.822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.431 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.431 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.439 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" HandleID="k8s-pod-network.822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.439 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" HandleID="k8s-pod-network.822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-csi--node--driver--pt8fx-eth0" Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.440 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:49.442681 containerd[1586]: 2025-01-30 16:03:49.441 [INFO][5386] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9" Jan 30 16:03:49.443309 containerd[1586]: time="2025-01-30T16:03:49.442744351Z" level=info msg="TearDown network for sandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\" successfully" Jan 30 16:03:49.447206 containerd[1586]: time="2025-01-30T16:03:49.447171296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 16:03:49.447261 containerd[1586]: time="2025-01-30T16:03:49.447229255Z" level=info msg="RemovePodSandbox \"822970f2f7a92073232f149136bd18ffcc54e6a1bd1f502de1fd0c7cf69c04b9\" returns successfully" Jan 30 16:03:49.447829 containerd[1586]: time="2025-01-30T16:03:49.447775053Z" level=info msg="StopPodSandbox for \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\"" Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:49.502 [WARNING][5410] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0", GenerateName:"calico-kube-controllers-85dd67d8df-", Namespace:"calico-system", SelfLink:"", UID:"40fbe7ed-2398-4d81-b72d-863ad9e6ffb9", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85dd67d8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74", Pod:"calico-kube-controllers-85dd67d8df-dv7gp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califedf616e5a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:49.502 [INFO][5410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:49.502 [INFO][5410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" iface="eth0" netns="" Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:49.502 [INFO][5410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:49.502 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:49.555 [INFO][5418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" HandleID="k8s-pod-network.026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:49.970 [INFO][5418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:49.970 [INFO][5418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:50.022 [WARNING][5418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" HandleID="k8s-pod-network.026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:50.048 [INFO][5418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" HandleID="k8s-pod-network.026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:50.055 [INFO][5418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:50.064090 containerd[1586]: 2025-01-30 16:03:50.059 [INFO][5410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:50.068434 containerd[1586]: time="2025-01-30T16:03:50.064155027Z" level=info msg="TearDown network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\" successfully" Jan 30 16:03:50.068434 containerd[1586]: time="2025-01-30T16:03:50.064186126Z" level=info msg="StopPodSandbox for \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\" returns successfully" Jan 30 16:03:50.068434 containerd[1586]: time="2025-01-30T16:03:50.065226103Z" level=info msg="RemovePodSandbox for \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\"" Jan 30 16:03:50.068434 containerd[1586]: time="2025-01-30T16:03:50.065258715Z" level=info msg="Forcibly stopping sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\"" Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.124 [WARNING][5436] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0", GenerateName:"calico-kube-controllers-85dd67d8df-", Namespace:"calico-system", SelfLink:"", UID:"40fbe7ed-2398-4d81-b72d-863ad9e6ffb9", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 16, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85dd67d8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-e08351c9d9.novalocal", ContainerID:"c8bdd5de6c6db7ecb51de50bd88c699869e7df64848cc86ed51b5fab8ee43d74", Pod:"calico-kube-controllers-85dd67d8df-dv7gp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califedf616e5a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.125 [INFO][5436] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.125 [INFO][5436] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" iface="eth0" netns="" Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.125 [INFO][5436] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.125 [INFO][5436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.153 [INFO][5442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" HandleID="k8s-pod-network.026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.153 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.153 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.162 [WARNING][5442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" HandleID="k8s-pod-network.026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.162 [INFO][5442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" HandleID="k8s-pod-network.026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Workload="ci--4081--3--0--2--e08351c9d9.novalocal-k8s-calico--kube--controllers--85dd67d8df--dv7gp-eth0" Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.165 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 16:03:50.169143 containerd[1586]: 2025-01-30 16:03:50.167 [INFO][5436] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca" Jan 30 16:03:50.170073 containerd[1586]: time="2025-01-30T16:03:50.169185911Z" level=info msg="TearDown network for sandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\" successfully" Jan 30 16:03:50.175403 containerd[1586]: time="2025-01-30T16:03:50.175353143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 16:03:50.175663 containerd[1586]: time="2025-01-30T16:03:50.175424698Z" level=info msg="RemovePodSandbox \"026fee91e7c1d4dcbca816f684bd4f4c3ce960e11739028605493949c8b231ca\" returns successfully" Jan 30 16:03:50.862889 containerd[1586]: time="2025-01-30T16:03:50.862088835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:50.863210 containerd[1586]: time="2025-01-30T16:03:50.863172745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 16:03:50.864473 containerd[1586]: time="2025-01-30T16:03:50.864427588Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:50.867083 containerd[1586]: time="2025-01-30T16:03:50.867036430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 16:03:50.868048 containerd[1586]: time="2025-01-30T16:03:50.867951553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.677417358s" Jan 30 16:03:50.868048 containerd[1586]: time="2025-01-30T16:03:50.867997420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 
16:03:50.871776 containerd[1586]: time="2025-01-30T16:03:50.871660186Z" level=info msg="CreateContainer within sandbox \"8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 16:03:50.892868 containerd[1586]: time="2025-01-30T16:03:50.892836186Z" level=info msg="CreateContainer within sandbox \"8f8c827e1d751e8998a7abbde15060b658ca96abd5a5e334af96db2ec64c663f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"711be46d694f3d9e61da14ae286038bc3f66dd12b9d2e5ec9ec8123edf987247\"" Jan 30 16:03:50.894281 containerd[1586]: time="2025-01-30T16:03:50.894142416Z" level=info msg="StartContainer for \"711be46d694f3d9e61da14ae286038bc3f66dd12b9d2e5ec9ec8123edf987247\"" Jan 30 16:03:50.960631 containerd[1586]: time="2025-01-30T16:03:50.960572171Z" level=info msg="StartContainer for \"711be46d694f3d9e61da14ae286038bc3f66dd12b9d2e5ec9ec8123edf987247\" returns successfully" Jan 30 16:03:51.390085 kubelet[2849]: I0130 16:03:51.389473 2849 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 16:03:51.390085 kubelet[2849]: I0130 16:03:51.389539 2849 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 16:04:05.148139 kubelet[2849]: I0130 16:04:05.147647 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pt8fx" podStartSLOduration=46.350443024 podStartE2EDuration="56.147628224s" podCreationTimestamp="2025-01-30 16:03:09 +0000 UTC" firstStartedPulling="2025-01-30 16:03:41.071947677 +0000 UTC m=+52.952330465" lastFinishedPulling="2025-01-30 16:03:50.869132867 +0000 UTC m=+62.749515665" observedRunningTime="2025-01-30 16:03:51.946886011 +0000 UTC m=+63.827268859" watchObservedRunningTime="2025-01-30 16:04:05.147628224 +0000 UTC m=+77.028011012" Jan 30 16:04:35.053916 systemd[1]: run-containerd-runc-k8s.io-276e9cd267982b933ba8518ba933fabee47f2ed323d2ab38968c380a13ca09b8-runc.2s0Wus.mount: Deactivated successfully. Jan 30 16:04:52.790671 systemd[1]: Started sshd@9-172.24.4.55:22-172.24.4.1:52226.service - OpenSSH per-connection server daemon (172.24.4.1:52226). Jan 30 16:04:54.085368 sshd[5624]: Accepted publickey for core from 172.24.4.1 port 52226 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:04:54.087361 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:04:54.093184 systemd-logind[1567]: New session 12 of user core. Jan 30 16:04:54.099270 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 16:04:54.974423 sshd[5624]: pam_unix(sshd:session): session closed for user core Jan 30 16:04:54.984987 systemd[1]: sshd@9-172.24.4.55:22-172.24.4.1:52226.service: Deactivated successfully. Jan 30 16:04:54.995908 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 16:04:54.998832 systemd-logind[1567]: Session 12 logged out. Waiting for processes to exit. Jan 30 16:04:55.001381 systemd-logind[1567]: Removed session 12. Jan 30 16:04:59.986801 systemd[1]: Started sshd@10-172.24.4.55:22-172.24.4.1:57170.service - OpenSSH per-connection server daemon (172.24.4.1:57170). 
Jan 30 16:05:01.205453 sshd[5664]: Accepted publickey for core from 172.24.4.1 port 57170 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:01.208751 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:01.220077 systemd-logind[1567]: New session 13 of user core. Jan 30 16:05:01.231308 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 16:05:01.772935 sshd[5664]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:01.776552 systemd[1]: sshd@10-172.24.4.55:22-172.24.4.1:57170.service: Deactivated successfully. Jan 30 16:05:01.781124 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 16:05:01.782651 systemd-logind[1567]: Session 13 logged out. Waiting for processes to exit. Jan 30 16:05:01.783728 systemd-logind[1567]: Removed session 13. Jan 30 16:05:01.901048 update_engine[1572]: I20250130 16:05:01.900965 1572 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 16:05:01.901405 update_engine[1572]: I20250130 16:05:01.901082 1572 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 16:05:01.901434 update_engine[1572]: I20250130 16:05:01.901413 1572 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 16:05:01.902306 update_engine[1572]: I20250130 16:05:01.902271 1572 omaha_request_params.cc:62] Current group set to lts Jan 30 16:05:01.907644 update_engine[1572]: I20250130 16:05:01.907397 1572 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 16:05:01.907644 update_engine[1572]: I20250130 16:05:01.907436 1572 update_attempter.cc:643] Scheduling an action processor start. Jan 30 16:05:01.907644 update_engine[1572]: I20250130 16:05:01.907533 1572 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 16:05:01.907644 update_engine[1572]: I20250130 16:05:01.907604 1572 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 16:05:01.907773 locksmithd[1599]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 16:05:01.907972 update_engine[1572]: I20250130 16:05:01.907716 1572 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 16:05:01.907972 update_engine[1572]: I20250130 16:05:01.907734 1572 omaha_request_action.cc:272] Request: Jan 30 16:05:01.907972 update_engine[1572]: Jan 30 16:05:01.907972 update_engine[1572]: Jan 30 16:05:01.907972 update_engine[1572]: Jan 30 16:05:01.907972 update_engine[1572]: Jan 30 16:05:01.907972 update_engine[1572]: Jan 30 16:05:01.907972 update_engine[1572]: Jan 30 16:05:01.907972 update_engine[1572]: Jan 30 16:05:01.907972 update_engine[1572]: Jan 30 16:05:01.907972 update_engine[1572]: I20250130 16:05:01.907747 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 16:05:01.914024 update_engine[1572]: I20250130 16:05:01.913975 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 16:05:01.914629 update_engine[1572]: I20250130 16:05:01.914572 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 16:05:01.927662 update_engine[1572]: E20250130 16:05:01.927413 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 16:05:01.927662 update_engine[1572]: I20250130 16:05:01.927560 1572 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 16:05:06.785911 systemd[1]: Started sshd@11-172.24.4.55:22-172.24.4.1:36152.service - OpenSSH per-connection server daemon (172.24.4.1:36152). Jan 30 16:05:08.109870 sshd[5705]: Accepted publickey for core from 172.24.4.1 port 36152 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:08.112720 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:08.124975 systemd-logind[1567]: New session 14 of user core. Jan 30 16:05:08.130523 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 16:05:08.847499 sshd[5705]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:08.856611 systemd[1]: Started sshd@12-172.24.4.55:22-172.24.4.1:36156.service - OpenSSH per-connection server daemon (172.24.4.1:36156). Jan 30 16:05:08.857732 systemd[1]: sshd@11-172.24.4.55:22-172.24.4.1:36152.service: Deactivated successfully. Jan 30 16:05:08.862571 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 16:05:08.865495 systemd-logind[1567]: Session 14 logged out. Waiting for processes to exit. Jan 30 16:05:08.867546 systemd-logind[1567]: Removed session 14. Jan 30 16:05:10.348822 sshd[5728]: Accepted publickey for core from 172.24.4.1 port 36156 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:10.351614 sshd[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:10.363278 systemd-logind[1567]: New session 15 of user core. Jan 30 16:05:10.370002 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 16:05:11.263095 sshd[5728]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:11.269940 systemd[1]: Started sshd@13-172.24.4.55:22-172.24.4.1:36158.service - OpenSSH per-connection server daemon (172.24.4.1:36158). Jan 30 16:05:11.275211 systemd[1]: sshd@12-172.24.4.55:22-172.24.4.1:36156.service: Deactivated successfully. Jan 30 16:05:11.281412 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 16:05:11.283182 systemd-logind[1567]: Session 15 logged out. Waiting for processes to exit. Jan 30 16:05:11.286503 systemd-logind[1567]: Removed session 15. Jan 30 16:05:11.900905 update_engine[1572]: I20250130 16:05:11.900774 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 16:05:11.901720 update_engine[1572]: I20250130 16:05:11.901273 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 16:05:11.901720 update_engine[1572]: I20250130 16:05:11.901644 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 16:05:11.912385 update_engine[1572]: E20250130 16:05:11.912290 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 16:05:11.912518 update_engine[1572]: I20250130 16:05:11.912403 1572 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 16:05:12.567471 sshd[5744]: Accepted publickey for core from 172.24.4.1 port 36158 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:12.570444 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:12.582074 systemd-logind[1567]: New session 16 of user core. 
Jan 30 16:05:12.590594 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 16:05:13.383759 sshd[5744]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:13.390382 systemd-logind[1567]: Session 16 logged out. Waiting for processes to exit. Jan 30 16:05:13.390884 systemd[1]: sshd@13-172.24.4.55:22-172.24.4.1:36158.service: Deactivated successfully. Jan 30 16:05:13.398480 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 16:05:13.401979 systemd-logind[1567]: Removed session 16. Jan 30 16:05:18.399121 systemd[1]: Started sshd@14-172.24.4.55:22-172.24.4.1:36668.service - OpenSSH per-connection server daemon (172.24.4.1:36668). Jan 30 16:05:19.724836 sshd[5760]: Accepted publickey for core from 172.24.4.1 port 36668 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:19.727598 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:19.738756 systemd-logind[1567]: New session 17 of user core. Jan 30 16:05:19.744553 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 16:05:20.414550 sshd[5760]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:20.421708 systemd[1]: sshd@14-172.24.4.55:22-172.24.4.1:36668.service: Deactivated successfully. Jan 30 16:05:20.427760 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 16:05:20.430227 systemd-logind[1567]: Session 17 logged out. Waiting for processes to exit. Jan 30 16:05:20.432630 systemd-logind[1567]: Removed session 17. Jan 30 16:05:21.902232 update_engine[1572]: I20250130 16:05:21.902068 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 16:05:21.903199 update_engine[1572]: I20250130 16:05:21.902521 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 16:05:21.903199 update_engine[1572]: I20250130 16:05:21.902894 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 16:05:21.913851 update_engine[1572]: E20250130 16:05:21.913759 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 16:05:21.914346 update_engine[1572]: I20250130 16:05:21.913870 1572 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 16:05:25.432706 systemd[1]: Started sshd@15-172.24.4.55:22-172.24.4.1:55446.service - OpenSSH per-connection server daemon (172.24.4.1:55446). Jan 30 16:05:26.546048 sshd[5796]: Accepted publickey for core from 172.24.4.1 port 55446 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:26.549733 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:26.567129 systemd-logind[1567]: New session 18 of user core. Jan 30 16:05:26.577192 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 16:05:27.455969 sshd[5796]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:27.460498 systemd[1]: sshd@15-172.24.4.55:22-172.24.4.1:55446.service: Deactivated successfully. Jan 30 16:05:27.464441 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 16:05:27.464618 systemd-logind[1567]: Session 18 logged out. Waiting for processes to exit. Jan 30 16:05:27.467437 systemd-logind[1567]: Removed session 18. 
Jan 30 16:05:31.904614 update_engine[1572]: I20250130 16:05:31.904287 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 16:05:31.907509 update_engine[1572]: I20250130 16:05:31.906177 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 16:05:31.907509 update_engine[1572]: I20250130 16:05:31.906697 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 16:05:31.917985 update_engine[1572]: E20250130 16:05:31.917857 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 16:05:31.918185 update_engine[1572]: I20250130 16:05:31.918087 1572 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 16:05:31.918185 update_engine[1572]: I20250130 16:05:31.918126 1572 omaha_request_action.cc:617] Omaha request response: Jan 30 16:05:31.918439 update_engine[1572]: E20250130 16:05:31.918351 1572 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 16:05:31.918565 update_engine[1572]: I20250130 16:05:31.918463 1572 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 16:05:31.918565 update_engine[1572]: I20250130 16:05:31.918491 1572 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 16:05:31.918565 update_engine[1572]: I20250130 16:05:31.918511 1572 update_attempter.cc:306] Processing Done. Jan 30 16:05:31.918565 update_engine[1572]: E20250130 16:05:31.918548 1572 update_attempter.cc:619] Update failed. Jan 30 16:05:31.918794 update_engine[1572]: I20250130 16:05:31.918572 1572 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 16:05:31.918794 update_engine[1572]: I20250130 16:05:31.918593 1572 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 16:05:31.918794 update_engine[1572]: I20250130 16:05:31.918612 1572 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 30 16:05:31.918975 update_engine[1572]: I20250130 16:05:31.918803 1572 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 16:05:31.918975 update_engine[1572]: I20250130 16:05:31.918879 1572 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 16:05:31.918975 update_engine[1572]: I20250130 16:05:31.918907 1572 omaha_request_action.cc:272] Request: Jan 30 16:05:31.918975 update_engine[1572]: Jan 30 16:05:31.918975 update_engine[1572]: Jan 30 16:05:31.918975 update_engine[1572]: Jan 30 16:05:31.918975 update_engine[1572]: Jan 30 16:05:31.918975 update_engine[1572]: Jan 30 16:05:31.918975 update_engine[1572]: Jan 30 16:05:31.918975 update_engine[1572]: I20250130 16:05:31.918926 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 16:05:31.919553 update_engine[1572]: I20250130 16:05:31.919438 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 16:05:31.920465 update_engine[1572]: I20250130 16:05:31.919991 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 16:05:31.921228 locksmithd[1599]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 16:05:31.931465 update_engine[1572]: E20250130 16:05:31.931193 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 16:05:31.931465 update_engine[1572]: I20250130 16:05:31.931278 1572 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 16:05:31.931465 update_engine[1572]: I20250130 16:05:31.931293 1572 omaha_request_action.cc:617] Omaha request response: Jan 30 16:05:31.931465 update_engine[1572]: I20250130 16:05:31.931305 1572 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 16:05:31.931465 update_engine[1572]: I20250130 16:05:31.931314 1572 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 16:05:31.931465 update_engine[1572]: I20250130 16:05:31.931324 1572 update_attempter.cc:306] Processing Done. Jan 30 16:05:31.931465 update_engine[1572]: I20250130 16:05:31.931334 1572 update_attempter.cc:310] Error event sent. Jan 30 16:05:31.931465 update_engine[1572]: I20250130 16:05:31.931347 1572 update_check_scheduler.cc:74] Next update check in 44m38s Jan 30 16:05:31.931922 locksmithd[1599]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 16:05:32.472853 systemd[1]: Started sshd@16-172.24.4.55:22-172.24.4.1:55460.service - OpenSSH per-connection server daemon (172.24.4.1:55460). Jan 30 16:05:33.748654 sshd[5811]: Accepted publickey for core from 172.24.4.1 port 55460 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:33.751903 sshd[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:33.763683 systemd-logind[1567]: New session 19 of user core. Jan 30 16:05:33.770383 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 16:05:34.619381 sshd[5811]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:34.626478 systemd[1]: Started sshd@17-172.24.4.55:22-172.24.4.1:60972.service - OpenSSH per-connection server daemon (172.24.4.1:60972). Jan 30 16:05:34.631837 systemd[1]: sshd@16-172.24.4.55:22-172.24.4.1:55460.service: Deactivated successfully. Jan 30 16:05:34.640546 systemd-logind[1567]: Session 19 logged out. Waiting for processes to exit. Jan 30 16:05:34.640550 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 16:05:34.647130 systemd-logind[1567]: Removed session 19. Jan 30 16:05:35.962577 sshd[5824]: Accepted publickey for core from 172.24.4.1 port 60972 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:35.965775 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:35.975870 systemd-logind[1567]: New session 20 of user core. Jan 30 16:05:35.988768 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 16:05:37.082297 sshd[5824]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:37.093430 systemd[1]: Started sshd@18-172.24.4.55:22-172.24.4.1:60980.service - OpenSSH per-connection server daemon (172.24.4.1:60980). Jan 30 16:05:37.096679 systemd[1]: sshd@17-172.24.4.55:22-172.24.4.1:60972.service: Deactivated successfully. Jan 30 16:05:37.108220 systemd-logind[1567]: Session 20 logged out. Waiting for processes to exit. 
Jan 30 16:05:37.112387 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 16:05:37.118288 systemd-logind[1567]: Removed session 20. Jan 30 16:05:38.326527 sshd[5859]: Accepted publickey for core from 172.24.4.1 port 60980 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:38.329558 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:38.345908 systemd-logind[1567]: New session 21 of user core. Jan 30 16:05:38.352988 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 16:05:41.589063 sshd[5859]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:41.598351 systemd[1]: Started sshd@19-172.24.4.55:22-172.24.4.1:60992.service - OpenSSH per-connection server daemon (172.24.4.1:60992). Jan 30 16:05:41.600522 systemd[1]: sshd@18-172.24.4.55:22-172.24.4.1:60980.service: Deactivated successfully. Jan 30 16:05:41.606207 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 16:05:41.606637 systemd-logind[1567]: Session 21 logged out. Waiting for processes to exit. Jan 30 16:05:41.610292 systemd-logind[1567]: Removed session 21. Jan 30 16:05:42.776465 sshd[5892]: Accepted publickey for core from 172.24.4.1 port 60992 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:42.780813 sshd[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:42.792151 systemd-logind[1567]: New session 22 of user core. Jan 30 16:05:42.803692 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 16:05:43.788452 sshd[5892]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:43.805869 systemd[1]: Started sshd@20-172.24.4.55:22-172.24.4.1:56708.service - OpenSSH per-connection server daemon (172.24.4.1:56708). Jan 30 16:05:43.807591 systemd[1]: sshd@19-172.24.4.55:22-172.24.4.1:60992.service: Deactivated successfully. Jan 30 16:05:43.820715 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 16:05:43.824496 systemd-logind[1567]: Session 22 logged out. Waiting for processes to exit. Jan 30 16:05:43.831243 systemd-logind[1567]: Removed session 22. Jan 30 16:05:45.142526 sshd[5903]: Accepted publickey for core from 172.24.4.1 port 56708 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:45.144810 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:45.155385 systemd-logind[1567]: New session 23 of user core. Jan 30 16:05:45.163589 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 16:05:45.918577 sshd[5903]: pam_unix(sshd:session): session closed for user core Jan 30 16:05:45.927354 systemd[1]: sshd@20-172.24.4.55:22-172.24.4.1:56708.service: Deactivated successfully. Jan 30 16:05:45.933488 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 16:05:45.933944 systemd-logind[1567]: Session 23 logged out. Waiting for processes to exit. Jan 30 16:05:45.938455 systemd-logind[1567]: Removed session 23. Jan 30 16:05:50.932137 systemd[1]: Started sshd@21-172.24.4.55:22-172.24.4.1:56722.service - OpenSSH per-connection server daemon (172.24.4.1:56722). Jan 30 16:05:52.222380 sshd[5945]: Accepted publickey for core from 172.24.4.1 port 56722 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 16:05:52.225417 sshd[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 16:05:52.235996 systemd-logind[1567]: New session 24 of user core. 
Jan 30 16:05:52.242606 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 16:05:52.912539 sshd[5945]: pam_unix(sshd:session): session closed for user core
Jan 30 16:05:52.919476 systemd[1]: sshd@21-172.24.4.55:22-172.24.4.1:56722.service: Deactivated successfully.
Jan 30 16:05:52.927068 systemd-logind[1567]: Session 24 logged out. Waiting for processes to exit.
Jan 30 16:05:52.928506 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 16:05:52.932891 systemd-logind[1567]: Removed session 24.
Jan 30 16:05:57.923782 systemd[1]: Started sshd@22-172.24.4.55:22-172.24.4.1:47166.service - OpenSSH per-connection server daemon (172.24.4.1:47166).
Jan 30 16:05:59.215117 sshd[5977]: Accepted publickey for core from 172.24.4.1 port 47166 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:05:59.218256 sshd[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:05:59.230094 systemd-logind[1567]: New session 25 of user core.
Jan 30 16:05:59.237696 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 16:06:00.034479 sshd[5977]: pam_unix(sshd:session): session closed for user core
Jan 30 16:06:00.040274 systemd[1]: sshd@22-172.24.4.55:22-172.24.4.1:47166.service: Deactivated successfully.
Jan 30 16:06:00.042709 systemd-logind[1567]: Session 25 logged out. Waiting for processes to exit.
Jan 30 16:06:00.043220 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 16:06:00.044892 systemd-logind[1567]: Removed session 25.
Jan 30 16:06:05.048544 systemd[1]: Started sshd@23-172.24.4.55:22-172.24.4.1:33346.service - OpenSSH per-connection server daemon (172.24.4.1:33346).
Jan 30 16:06:07.010435 sshd[6000]: Accepted publickey for core from 172.24.4.1 port 33346 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:06:07.013306 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:06:07.024506 systemd-logind[1567]: New session 26 of user core.
Jan 30 16:06:07.034179 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 16:06:07.839134 sshd[6000]: pam_unix(sshd:session): session closed for user core
Jan 30 16:06:07.843679 systemd[1]: sshd@23-172.24.4.55:22-172.24.4.1:33346.service: Deactivated successfully.
Jan 30 16:06:07.844304 systemd-logind[1567]: Session 26 logged out. Waiting for processes to exit.
Jan 30 16:06:07.849395 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 16:06:07.851525 systemd-logind[1567]: Removed session 26.
Jan 30 16:06:12.848886 systemd[1]: Started sshd@24-172.24.4.55:22-172.24.4.1:33362.service - OpenSSH per-connection server daemon (172.24.4.1:33362).
Jan 30 16:06:14.188997 sshd[6030]: Accepted publickey for core from 172.24.4.1 port 33362 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:06:14.192483 sshd[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:06:14.209907 systemd-logind[1567]: New session 27 of user core.
Jan 30 16:06:14.216165 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 16:06:15.012827 sshd[6030]: pam_unix(sshd:session): session closed for user core
Jan 30 16:06:15.018241 systemd[1]: sshd@24-172.24.4.55:22-172.24.4.1:33362.service: Deactivated successfully.
Jan 30 16:06:15.026693 systemd-logind[1567]: Session 27 logged out. Waiting for processes to exit.
Jan 30 16:06:15.028143 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 16:06:15.030913 systemd-logind[1567]: Removed session 27.
Jan 30 16:06:20.024513 systemd[1]: Started sshd@25-172.24.4.55:22-172.24.4.1:35368.service - OpenSSH per-connection server daemon (172.24.4.1:35368).
Jan 30 16:06:21.363513 sshd[6052]: Accepted publickey for core from 172.24.4.1 port 35368 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 16:06:21.366515 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 16:06:21.377713 systemd-logind[1567]: New session 28 of user core.
Jan 30 16:06:21.383568 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 16:06:22.231596 sshd[6052]: pam_unix(sshd:session): session closed for user core
Jan 30 16:06:22.236970 systemd[1]: sshd@25-172.24.4.55:22-172.24.4.1:35368.service: Deactivated successfully.
Jan 30 16:06:22.244474 systemd-logind[1567]: Session 28 logged out. Waiting for processes to exit.
Jan 30 16:06:22.245323 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 16:06:22.248978 systemd-logind[1567]: Removed session 28.