Jan 13 22:02:12.096497 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 22:02:12.096524 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 22:02:12.096534 kernel: BIOS-provided physical RAM map: Jan 13 22:02:12.096541 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 22:02:12.096548 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 22:02:12.096558 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 22:02:12.096566 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 13 22:02:12.096573 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 13 22:02:12.096581 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 22:02:12.096588 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 22:02:12.096595 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 13 22:02:12.096602 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 22:02:12.096610 kernel: NX (Execute Disable) protection: active Jan 13 22:02:12.096617 kernel: APIC: Static calls initialized Jan 13 22:02:12.096628 kernel: SMBIOS 3.0.0 present. Jan 13 22:02:12.096636 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 13 22:02:12.096643 kernel: Hypervisor detected: KVM Jan 13 22:02:12.096651 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 22:02:12.096659 kernel: kvm-clock: using sched offset of 3727098707 cycles Jan 13 22:02:12.096669 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 22:02:12.096677 kernel: tsc: Detected 1996.249 MHz processor Jan 13 22:02:12.096685 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 22:02:12.096693 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 22:02:12.096701 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 13 22:02:12.096709 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 22:02:12.096717 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 22:02:12.096725 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 13 22:02:12.096733 kernel: ACPI: Early table checksum verification disabled Jan 13 22:02:12.096742 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 13 22:02:12.096750 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 22:02:12.096758 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 22:02:12.096766 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 22:02:12.096774 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 13 22:02:12.097842 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 22:02:12.097853 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 13 22:02:12.097861 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 13 22:02:12.097869 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 13 22:02:12.097880 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 13 22:02:12.097888 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 13 22:02:12.097896 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 13 22:02:12.097907 kernel: No NUMA configuration found Jan 13 22:02:12.097915 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 13 22:02:12.097923 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 13 22:02:12.097934 kernel: Zone ranges: Jan 13 22:02:12.097942 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 22:02:12.097950 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 22:02:12.097958 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 13 22:02:12.097966 kernel: Movable zone start for each node Jan 13 22:02:12.097974 kernel: Early memory node ranges Jan 13 22:02:12.097983 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 22:02:12.097991 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 13 22:02:12.098000 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 13 22:02:12.098009 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 13 22:02:12.098017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 22:02:12.098025 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 22:02:12.098033 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 13 22:02:12.098041 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 22:02:12.098050 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 22:02:12.098058 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 22:02:12.098066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 22:02:12.098076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 22:02:12.098085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 22:02:12.098093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 22:02:12.098101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 22:02:12.098109 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 22:02:12.098118 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 22:02:12.098126 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 22:02:12.098134 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 13 22:02:12.098142 kernel: Booting paravirtualized kernel on KVM Jan 13 22:02:12.098153 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 22:02:12.098161 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 13 22:02:12.098169 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 22:02:12.098177 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 22:02:12.098185 kernel: pcpu-alloc: [0] 0 1 Jan 13 22:02:12.098193 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 13 22:02:12.098203 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 22:02:12.098212 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 22:02:12.098222 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 22:02:12.098231 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 22:02:12.098239 kernel: Fallback order for Node 0: 0 Jan 13 22:02:12.098247 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 13 22:02:12.098255 kernel: Policy zone: Normal Jan 13 22:02:12.098264 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 22:02:12.098272 kernel: software IO TLB: area num 2. Jan 13 22:02:12.098280 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 13 22:02:12.098289 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 22:02:12.098299 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 22:02:12.098307 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 22:02:12.098315 kernel: Dynamic Preempt: voluntary Jan 13 22:02:12.098323 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 22:02:12.098332 kernel: rcu: RCU event tracing is enabled. Jan 13 22:02:12.098341 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 22:02:12.098349 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 22:02:12.098357 kernel: Rude variant of Tasks RCU enabled. Jan 13 22:02:12.098366 kernel: Tracing variant of Tasks RCU enabled. Jan 13 22:02:12.098375 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 22:02:12.098384 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 22:02:12.098392 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 22:02:12.098400 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 22:02:12.098408 kernel: Console: colour VGA+ 80x25 Jan 13 22:02:12.098416 kernel: printk: console [tty0] enabled Jan 13 22:02:12.098425 kernel: printk: console [ttyS0] enabled Jan 13 22:02:12.098433 kernel: ACPI: Core revision 20230628 Jan 13 22:02:12.098441 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 22:02:12.098451 kernel: x2apic enabled Jan 13 22:02:12.098459 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 22:02:12.098468 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 22:02:12.098476 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 22:02:12.098484 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 13 22:02:12.098493 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 13 22:02:12.098501 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 13 22:02:12.098509 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 22:02:12.098517 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 22:02:12.098527 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 22:02:12.098535 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 22:02:12.098544 kernel: Speculative Store Bypass: Vulnerable Jan 13 22:02:12.098552 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 13 22:02:12.098560 kernel: Freeing SMP alternatives memory: 32K Jan 13 22:02:12.098574 kernel: pid_max: default: 32768 minimum: 301 Jan 13 22:02:12.098584 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 22:02:12.098593 kernel: landlock: Up and running. Jan 13 22:02:12.098601 kernel: SELinux: Initializing. Jan 13 22:02:12.098610 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 22:02:12.098619 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 22:02:12.098627 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 13 22:02:12.098638 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 22:02:12.098647 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 22:02:12.098656 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 22:02:12.098665 kernel: Performance Events: AMD PMU driver. Jan 13 22:02:12.098673 kernel: ... version: 0 Jan 13 22:02:12.098684 kernel: ... bit width: 48 Jan 13 22:02:12.098692 kernel: ... generic registers: 4 Jan 13 22:02:12.098701 kernel: ... value mask: 0000ffffffffffff Jan 13 22:02:12.098709 kernel: ... max period: 00007fffffffffff Jan 13 22:02:12.098718 kernel: ... fixed-purpose events: 0 Jan 13 22:02:12.098727 kernel: ... event mask: 000000000000000f Jan 13 22:02:12.098735 kernel: signal: max sigframe size: 1440 Jan 13 22:02:12.098744 kernel: rcu: Hierarchical SRCU implementation. Jan 13 22:02:12.098753 kernel: rcu: Max phase no-delay instances is 400. Jan 13 22:02:12.098763 kernel: smp: Bringing up secondary CPUs ... Jan 13 22:02:12.098772 kernel: smpboot: x86: Booting SMP configuration: Jan 13 22:02:12.098793 kernel: .... 
node #0, CPUs: #1 Jan 13 22:02:12.098802 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 22:02:12.098811 kernel: smpboot: Max logical packages: 2 Jan 13 22:02:12.098820 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 13 22:02:12.098828 kernel: devtmpfs: initialized Jan 13 22:02:12.098837 kernel: x86/mm: Memory block size: 128MB Jan 13 22:02:12.098845 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 22:02:12.098857 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 22:02:12.098866 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 22:02:12.098874 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 22:02:12.098883 kernel: audit: initializing netlink subsys (disabled) Jan 13 22:02:12.098893 kernel: audit: type=2000 audit(1736805731.585:1): state=initialized audit_enabled=0 res=1 Jan 13 22:02:12.098901 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 22:02:12.098910 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 22:02:12.098919 kernel: cpuidle: using governor menu Jan 13 22:02:12.098927 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 22:02:12.098938 kernel: dca service started, version 1.12.1 Jan 13 22:02:12.098946 kernel: PCI: Using configuration type 1 for base access Jan 13 22:02:12.098955 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 13 22:02:12.098964 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 22:02:12.098973 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 22:02:12.098981 kernel: ACPI: Added _OSI(Module Device) Jan 13 22:02:12.098990 kernel: ACPI: Added _OSI(Processor Device) Jan 13 22:02:12.098998 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 22:02:12.099007 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 22:02:12.099017 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 22:02:12.099026 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 22:02:12.099035 kernel: ACPI: Interpreter enabled Jan 13 22:02:12.099043 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 22:02:12.099052 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 22:02:12.099060 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 22:02:12.099069 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 22:02:12.099078 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 13 22:02:12.099086 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 22:02:12.099226 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 22:02:12.099323 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 22:02:12.099413 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 22:02:12.099426 kernel: acpiphp: Slot [3] registered Jan 13 22:02:12.099435 kernel: acpiphp: Slot [4] registered Jan 13 22:02:12.099444 kernel: acpiphp: Slot [5] registered Jan 13 22:02:12.099452 kernel: acpiphp: Slot [6] registered Jan 13 22:02:12.099461 kernel: acpiphp: Slot [7] registered Jan 13 22:02:12.099473 kernel: acpiphp: Slot [8] registered Jan 13 22:02:12.099481 kernel: acpiphp: Slot [9] registered Jan 13 22:02:12.099489 kernel: acpiphp: Slot [10] registered Jan 13 22:02:12.099498 
kernel: acpiphp: Slot [11] registered Jan 13 22:02:12.099506 kernel: acpiphp: Slot [12] registered Jan 13 22:02:12.099515 kernel: acpiphp: Slot [13] registered Jan 13 22:02:12.099524 kernel: acpiphp: Slot [14] registered Jan 13 22:02:12.099532 kernel: acpiphp: Slot [15] registered Jan 13 22:02:12.099541 kernel: acpiphp: Slot [16] registered Jan 13 22:02:12.099551 kernel: acpiphp: Slot [17] registered Jan 13 22:02:12.099559 kernel: acpiphp: Slot [18] registered Jan 13 22:02:12.099568 kernel: acpiphp: Slot [19] registered Jan 13 22:02:12.099576 kernel: acpiphp: Slot [20] registered Jan 13 22:02:12.099585 kernel: acpiphp: Slot [21] registered Jan 13 22:02:12.099593 kernel: acpiphp: Slot [22] registered Jan 13 22:02:12.099602 kernel: acpiphp: Slot [23] registered Jan 13 22:02:12.099610 kernel: acpiphp: Slot [24] registered Jan 13 22:02:12.099619 kernel: acpiphp: Slot [25] registered Jan 13 22:02:12.099627 kernel: acpiphp: Slot [26] registered Jan 13 22:02:12.099637 kernel: acpiphp: Slot [27] registered Jan 13 22:02:12.099646 kernel: acpiphp: Slot [28] registered Jan 13 22:02:12.099654 kernel: acpiphp: Slot [29] registered Jan 13 22:02:12.099663 kernel: acpiphp: Slot [30] registered Jan 13 22:02:12.099671 kernel: acpiphp: Slot [31] registered Jan 13 22:02:12.099680 kernel: PCI host bridge to bus 0000:00 Jan 13 22:02:12.099775 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 22:02:12.101939 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 22:02:12.102030 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 22:02:12.102112 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 22:02:12.102191 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 13 22:02:12.102272 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 22:02:12.102382 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 22:02:12.102485 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 13 22:02:12.102597 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 13 22:02:12.102689 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 13 22:02:12.102801 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 13 22:02:12.102898 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 13 22:02:12.102986 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 13 22:02:12.103075 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 13 22:02:12.103174 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 13 22:02:12.103269 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 13 22:02:12.103359 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 13 22:02:12.103458 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 13 22:02:12.103549 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 13 22:02:12.103639 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 13 22:02:12.103732 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 13 22:02:12.105858 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 13 22:02:12.105959 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 22:02:12.106063 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 13 22:02:12.106157 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 13 22:02:12.106249 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 13 22:02:12.106340 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 13 22:02:12.106433 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 13 22:02:12.106533 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 13 22:02:12.106632 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 22:02:12.106723 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 13 22:02:12.108859 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 13 22:02:12.108971 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 13 22:02:12.109070 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 13 22:02:12.109166 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 13 22:02:12.109277 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 22:02:12.109374 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 13 22:02:12.109469 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 13 22:02:12.109571 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 13 22:02:12.109585 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 22:02:12.109595 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 22:02:12.109605 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 22:02:12.109614 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 22:02:12.109627 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 22:02:12.109637 kernel: iommu: Default domain type: Translated Jan 13 22:02:12.109646 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 22:02:12.109656 kernel: PCI: Using ACPI for IRQ routing Jan 13 22:02:12.109665 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 22:02:12.109674 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 22:02:12.109683 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 13 22:02:12.109776 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 13 22:02:12.109919 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 13 22:02:12.110019 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 22:02:12.110033 kernel: vgaarb: loaded Jan 13 22:02:12.110043 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 22:02:12.110052 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 22:02:12.110061 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 22:02:12.110070 kernel: pnp: PnP ACPI init Jan 13 22:02:12.110170 kernel: pnp 00:03: [dma 2] Jan 13 22:02:12.110186 kernel: pnp: PnP ACPI: found 5 devices Jan 13 22:02:12.110195 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 22:02:12.110208 kernel: NET: Registered PF_INET protocol family Jan 13 22:02:12.110218 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 22:02:12.110227 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 22:02:12.110237 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 22:02:12.110246 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 22:02:12.110255 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 13 22:02:12.110265 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 22:02:12.110274 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 22:02:12.110285 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 22:02:12.110295 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 22:02:12.110304 kernel: NET: Registered PF_XDP protocol family Jan 13 22:02:12.110387 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 22:02:12.110471 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 22:02:12.110554 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 22:02:12.110637 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 13 22:02:12.110719 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 13 22:02:12.110834 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 13 22:02:12.110949 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 22:02:12.110963 kernel: PCI: CLS 0 bytes, default 64 Jan 13 22:02:12.110973 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 22:02:12.110982 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 13 22:02:12.110991 kernel: Initialise system trusted keyrings Jan 13 22:02:12.111001 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 22:02:12.111010 kernel: Key type asymmetric registered Jan 13 22:02:12.111019 kernel: Asymmetric key parser 'x509' registered Jan 13 22:02:12.111031 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 22:02:12.111041 kernel: io scheduler mq-deadline registered Jan 13 22:02:12.111050 kernel: io scheduler kyber registered Jan 13 22:02:12.111059 kernel: io scheduler bfq registered Jan 13 22:02:12.111068 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 22:02:12.111078 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 13 22:02:12.111087 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 13 22:02:12.111097 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 13 22:02:12.111106 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 13 22:02:12.111117 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 22:02:12.111127 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 22:02:12.111136 kernel: random: crng init done Jan 13 22:02:12.111145 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 22:02:12.111154 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 22:02:12.111163 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 22:02:12.111259 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 22:02:12.111274 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 22:02:12.111362 kernel: rtc_cmos 00:04: registered as rtc0 Jan 13 22:02:12.111448 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T22:02:11 UTC (1736805731) Jan 13 22:02:12.111535 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 13 22:02:12.111549 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 22:02:12.111559 kernel: NET: Registered PF_INET6 protocol family Jan 13 22:02:12.111568 kernel: Segment Routing with IPv6 Jan 13 22:02:12.111577 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 22:02:12.111586 kernel: NET: Registered PF_PACKET 
protocol family Jan 13 22:02:12.111596 kernel: Key type dns_resolver registered Jan 13 22:02:12.111608 kernel: IPI shorthand broadcast: enabled Jan 13 22:02:12.111617 kernel: sched_clock: Marking stable (1031011982, 174277040)->(1247822281, -42533259) Jan 13 22:02:12.111627 kernel: registered taskstats version 1 Jan 13 22:02:12.111636 kernel: Loading compiled-in X.509 certificates Jan 13 22:02:12.111645 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 22:02:12.111654 kernel: Key type .fscrypt registered Jan 13 22:02:12.111663 kernel: Key type fscrypt-provisioning registered Jan 13 22:02:12.111673 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 22:02:12.111684 kernel: ima: Allocated hash algorithm: sha1 Jan 13 22:02:12.111693 kernel: ima: No architecture policies found Jan 13 22:02:12.111702 kernel: clk: Disabling unused clocks Jan 13 22:02:12.111711 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 22:02:12.111721 kernel: Write protecting the kernel read-only data: 36864k Jan 13 22:02:12.111730 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 22:02:12.111740 kernel: Run /init as init process Jan 13 22:02:12.111749 kernel: with arguments: Jan 13 22:02:12.111758 kernel: /init Jan 13 22:02:12.111767 kernel: with environment: Jan 13 22:02:12.113795 kernel: HOME=/ Jan 13 22:02:12.113811 kernel: TERM=linux Jan 13 22:02:12.113821 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 22:02:12.113834 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 22:02:12.113847 systemd[1]: Detected virtualization kvm. Jan 13 22:02:12.113858 systemd[1]: Detected architecture x86-64. Jan 13 22:02:12.113868 systemd[1]: Running in initrd. Jan 13 22:02:12.113881 systemd[1]: No hostname configured, using default hostname. Jan 13 22:02:12.113891 systemd[1]: Hostname set to . Jan 13 22:02:12.113902 systemd[1]: Initializing machine ID from VM UUID. Jan 13 22:02:12.113912 systemd[1]: Queued start job for default target initrd.target. Jan 13 22:02:12.113923 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 22:02:12.113933 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 22:02:12.113945 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 22:02:12.113964 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 22:02:12.113977 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 22:02:12.113987 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 22:02:12.113999 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 22:02:12.114010 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 22:02:12.114022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
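Two numbers in the dmesg above can be checked by hand: the three usable BIOS-e820 ranges should account for the VM's 4 GiB of RAM, and the preset lpj value determines the BogoMIPS figures printed during SMP bring-up (on x86, BogoMIPS is lpj scaled by HZ/500000; HZ=1000 is an assumption here, as is every helper name). A quick check in Python:

    usable = [  # the three "usable" BIOS-e820 ranges above
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x00000000bffdcfff),
        (0x0000000100000000, 0x000000013fffffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"usable RAM: {total / 2**30:.2f} GiB")  # ~4.00 GiB for this 4 GiB VM

    lpj, HZ = 1996249, 1000
    print(lpj * HZ / 500000)      # 3992.498 -> kernel truncates to "3992.49"
    print(2 * lpj * HZ / 500000)  # 7984.996 -> "7984.99" for both CPUs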
Jan 13 22:02:12.114033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 22:02:12.114043 systemd[1]: Reached target paths.target - Path Units.
Jan 13 22:02:12.114054 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 22:02:12.114064 systemd[1]: Reached target swap.target - Swaps.
Jan 13 22:02:12.114075 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 22:02:12.114085 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 22:02:12.114095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 22:02:12.114106 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 22:02:12.114118 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 22:02:12.114129 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 22:02:12.114139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 22:02:12.114150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 22:02:12.114160 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 22:02:12.114171 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 22:02:12.114181 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 22:02:12.114191 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 22:02:12.114203 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 22:02:12.114214 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 22:02:12.114224 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 22:02:12.114235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:02:12.114245 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 22:02:12.114255 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 22:02:12.114266 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 22:02:12.114279 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 22:02:12.114314 systemd-journald[183]: Collecting audit messages is disabled.
Jan 13 22:02:12.114344 systemd-journald[183]: Journal started
Jan 13 22:02:12.114369 systemd-journald[183]: Runtime Journal (/run/log/journal/524054a76aef4352b14d8f09d14d4412) is 8.0M, max 78.3M, 70.3M free.
Jan 13 22:02:12.096812 systemd-modules-load[185]: Inserted module 'overlay'
Jan 13 22:02:12.122795 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 22:02:12.123637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:02:12.129935 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 22:02:12.138951 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 22:02:12.139659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 22:02:12.148819 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 22:02:12.151005 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 22:02:12.151824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
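With the journal now collecting both kernel and service messages, the prefixed timestamps can be used to measure boot phases. A small sketch, assuming the year 2025 from the kernel banner (syslog-style stamps omit it):

    from datetime import datetime

    def ts(stamp: str) -> datetime:
        return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S.%f")

    boot    = ts("Jan 13 22:02:12.096497")  # first kernel line of this boot
    journal = ts("Jan 13 22:02:12.114344")  # "Journal started"
    files   = ts("Jan 13 22:02:25.324515")  # ignition-files finishes, further below
    print((journal - boot).total_seconds())  # ~0.018 s
    print((files - boot).total_seconds())    # ~13.2 s, mostly metadata waits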
Jan 13 22:02:12.156925 kernel: Bridge firewalling registered
Jan 13 22:02:12.154512 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 13 22:02:12.155939 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 22:02:12.163175 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 22:02:12.165556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 22:02:12.167967 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:02:12.169918 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 22:02:12.175906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:02:12.184446 dracut-cmdline[216]: dracut-dracut-053
Jan 13 22:02:12.186450 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 22:02:12.188455 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 22:02:12.218157 systemd-resolved[224]: Positive Trust Anchors:
Jan 13 22:02:12.218896 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 22:02:12.218939 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 22:02:12.224735 systemd-resolved[224]: Defaulting to hostname 'linux'.
Jan 13 22:02:12.225697 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 22:02:12.226519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 22:02:12.250836 kernel: SCSI subsystem initialized
Jan 13 22:02:12.261827 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 22:02:12.274255 kernel: iscsi: registered transport (tcp)
Jan 13 22:02:12.297158 kernel: iscsi: registered transport (qla4xxx)
Jan 13 22:02:12.297220 kernel: QLogic iSCSI HBA Driver
Jan 13 22:02:12.357812 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 22:02:12.368120 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 22:02:12.418609 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
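The dracut-cmdline hook above echoes the kernel command line it will act on. Parsing such a line is mostly splitting on whitespace and then on the first '=' so that values like LABEL=ROOT survive intact; a minimal sketch, abbreviated to a few of the parameters above (in this toy parser a repeated key such as rootflags simply keeps its last value, whereas the kernel preserves both occurrences):

    cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
               "mount.usr=/dev/mapper/usr root=LABEL=ROOT "
               "console=ttyS0,115200n8 flatcar.autologin")
    params, flags = {}, []
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")  # split once, keep inner '='
            params[key] = value
        else:
            flags.append(token)  # bare switches like flatcar.autologin
    print(params["root"], flags)  # LABEL=ROOT ['flatcar.autologin']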
Jan 13 22:02:12.418662 kernel: device-mapper: uevent: version 1.0.3
Jan 13 22:02:12.420847 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 22:02:12.466854 kernel: raid6: sse2x4 gen() 12911 MB/s
Jan 13 22:02:12.484833 kernel: raid6: sse2x2 gen() 14865 MB/s
Jan 13 22:02:12.503244 kernel: raid6: sse2x1 gen() 9846 MB/s
Jan 13 22:02:12.503305 kernel: raid6: using algorithm sse2x2 gen() 14865 MB/s
Jan 13 22:02:12.522356 kernel: raid6: .... xor() 9054 MB/s, rmw enabled
Jan 13 22:02:12.522417 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 22:02:12.545520 kernel: xor: measuring software checksum speed
Jan 13 22:02:12.545581 kernel: prefetch64-sse : 17279 MB/sec
Jan 13 22:02:12.546070 kernel: generic_sse : 15734 MB/sec
Jan 13 22:02:12.548405 kernel: xor: using function: prefetch64-sse (17279 MB/sec)
Jan 13 22:02:12.728871 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 22:02:12.741966 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 22:02:12.749069 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 22:02:12.799767 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 13 22:02:12.810640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 22:02:12.820010 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 22:02:12.841886 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 13 22:02:12.870021 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 22:02:12.875936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 22:02:12.924340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 22:02:12.939175 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 22:02:12.982244 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 22:02:12.985577 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 22:02:12.988067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 22:02:12.990317 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 22:02:12.997054 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 22:02:13.015638 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 22:02:13.018912 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 13 22:02:13.065004 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 13 22:02:13.065176 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 22:02:13.065193 kernel: GPT:17805311 != 20971519
Jan 13 22:02:13.065206 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 22:02:13.065220 kernel: GPT:17805311 != 20971519
Jan 13 22:02:13.065240 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 22:02:13.065253 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 22:02:13.070827 kernel: libata version 3.00 loaded.
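The GPT complaints above are a size mismatch, not corruption: GPT keeps a backup header in the disk's last sector, and this image was built for a smaller disk than the 10 GiB volume it now boots from. The arithmetic, using the numbers in the log:

    SECTOR = 512
    sectors = 20971520            # virtio_blk reports this many 512-byte blocks
    print(sectors * SECTOR / 1e9, sectors * SECTOR / 2**30)  # 10.74 GB / 10.0 GiB
    expected = sectors - 1        # backup header belongs in the last LBA
    found = 17805311              # where it actually is
    print(expected != found)      # True -> "GPT:17805311 != 20971519"
    print((expected - found) * SECTOR / 2**30)  # ~1.5 GiB of growth to reclaim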
Jan 13 22:02:13.073398 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 13 22:02:13.079824 kernel: scsi host0: ata_piix
Jan 13 22:02:13.079969 kernel: scsi host1: ata_piix
Jan 13 22:02:13.080090 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 13 22:02:13.080104 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 13 22:02:13.077740 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 22:02:13.078043 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:02:13.084829 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 22:02:13.086003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:02:13.086222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:02:13.087403 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:02:13.096279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:02:13.118806 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455)
Jan 13 22:02:13.127870 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (457)
Jan 13 22:02:13.138238 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 22:02:13.172328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:02:13.179205 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 22:02:13.184852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 22:02:13.189549 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 22:02:13.190165 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 22:02:13.200088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 22:02:13.204391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 22:02:13.211404 disk-uuid[506]: Primary Header is updated.
Jan 13 22:02:13.211404 disk-uuid[506]: Secondary Entries is updated.
Jan 13 22:02:13.211404 disk-uuid[506]: Secondary Header is updated.
Jan 13 22:02:13.229676 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 22:02:13.229701 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 22:02:13.232900 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 22:02:13.246394 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:02:14.237867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 22:02:14.238902 disk-uuid[508]: The operation has completed successfully.
Jan 13 22:02:14.322990 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 22:02:14.323103 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 22:02:14.344953 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 22:02:14.369015 sh[530]: Success
Jan 13 22:02:14.388816 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 13 22:02:14.502899 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
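The disk-uuid service above is what resolves those GPT warnings: it rewrites the primary and secondary GPT structures for the grown disk (each rewrite triggers one of the repeated vda partition rescans). For reference, the backup-header pointer it fixes lives at byte offset 32 of the primary header in LBA 1, per the UEFI spec; a read-only peek in Python (the device path is a placeholder and needs read access to the disk):

    import struct

    def gpt_lbas(path="/dev/vda"):
        with open(path, "rb") as disk:
            disk.seek(512)             # primary GPT header occupies LBA 1
            hdr = disk.read(92)
        assert hdr[:8] == b"EFI PART"  # GPT signature
        current, backup = struct.unpack_from("<QQ", hdr, 24)  # MyLBA, AlternateLBA
        return current, backup         # after the repair: (1, 20971519)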
Jan 13 22:02:14.511019 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 22:02:14.514634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 22:02:14.560930 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 22:02:14.561063 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:02:14.565648 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 22:02:14.570966 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 22:02:14.576083 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 22:02:14.602732 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 22:02:14.604562 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 22:02:14.611156 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 22:02:14.615068 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 22:02:14.659667 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 22:02:14.659827 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:02:14.663870 kernel: BTRFS info (device vda6): using free space tree
Jan 13 22:02:14.672880 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 22:02:14.686462 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 22:02:14.688274 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 22:02:14.700043 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 22:02:14.706055 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 22:02:14.769382 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 22:02:14.784453 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 22:02:14.808939 systemd-networkd[713]: lo: Link UP
Jan 13 22:02:14.808948 systemd-networkd[713]: lo: Gained carrier
Jan 13 22:02:14.816125 systemd-networkd[713]: Enumeration completed
Jan 13 22:02:14.817213 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 22:02:14.818060 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 22:02:14.818064 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:02:14.820997 systemd[1]: Reached target network.target - Network.
Jan 13 22:02:14.822323 systemd-networkd[713]: eth0: Link UP
Jan 13 22:02:14.822329 systemd-networkd[713]: eth0: Gained carrier
Jan 13 22:02:14.822347 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 22:02:14.837834 systemd-networkd[713]: eth0: DHCPv4 address 172.24.4.131/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 22:02:14.850338 ignition[638]: Ignition 2.19.0
Jan 13 22:02:14.850352 ignition[638]: Stage: fetch-offline
Jan 13 22:02:14.851769 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
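verity-setup has produced /dev/mapper/usr, a device whose every read is checked against a hash tree rooted in the verity.usrhash value from the kernel command line. The sketch below is a toy illustration of that idea only; real dm-verity adds a superblock, a salt, and a fixed on-disk node layout, none of which is modeled here (and the sketch assumes non-empty input):

    import hashlib

    BLOCK = 4096  # dm-verity's default block size

    def toy_root_hash(data: bytes) -> str:
        level = [hashlib.sha256(data[i:i + BLOCK]).digest()
                 for i in range(0, len(data), BLOCK)]
        while len(level) > 1:          # hash concatenated digests upward
            joined = b"".join(level)
            level = [hashlib.sha256(joined[i:i + BLOCK]).digest()
                     for i in range(0, len(joined), BLOCK)]
        return level[0].hex()

A block that fails to chain up to the trusted root hash makes the read fail, which is how /usr stays tamper-evident without re-hashing the whole partition at boot.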
Jan 13 22:02:14.850390 ignition[638]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:02:14.850400 ignition[638]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:02:14.850500 ignition[638]: parsed url from cmdline: ""
Jan 13 22:02:14.850504 ignition[638]: no config URL provided
Jan 13 22:02:14.850511 ignition[638]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 22:02:14.850521 ignition[638]: no config at "/usr/lib/ignition/user.ign"
Jan 13 22:02:14.850527 ignition[638]: failed to fetch config: resource requires networking
Jan 13 22:02:14.850728 ignition[638]: Ignition finished successfully
Jan 13 22:02:14.859031 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 22:02:14.872497 ignition[722]: Ignition 2.19.0
Jan 13 22:02:14.872511 ignition[722]: Stage: fetch
Jan 13 22:02:14.872710 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:02:14.872724 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:02:14.872865 ignition[722]: parsed url from cmdline: ""
Jan 13 22:02:14.872870 ignition[722]: no config URL provided
Jan 13 22:02:14.872876 ignition[722]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 22:02:14.872885 ignition[722]: no config at "/usr/lib/ignition/user.ign"
Jan 13 22:02:14.873034 ignition[722]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 22:02:14.873051 ignition[722]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 22:02:14.873062 ignition[722]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 22:02:15.050849 ignition[722]: GET result: OK
Jan 13 22:02:15.051048 ignition[722]: parsing config with SHA512: d8273af7465f066f3165a6e304dcda4aa229eb8562028dda0953b40aa9249b2aa1f56792498f94d02a86be8769819b61a1e39f8436f60400a94282763280f14a
Jan 13 22:02:15.061391 unknown[722]: fetched base config from "system"
Jan 13 22:02:15.061421 unknown[722]: fetched base config from "system"
Jan 13 22:02:15.062455 ignition[722]: fetch: fetch complete
Jan 13 22:02:15.061438 unknown[722]: fetched user config from "openstack"
Jan 13 22:02:15.062469 ignition[722]: fetch: fetch passed
Jan 13 22:02:15.068515 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 22:02:15.062565 ignition[722]: Ignition finished successfully
Jan 13 22:02:15.084125 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 22:02:15.117276 ignition[728]: Ignition 2.19.0
Jan 13 22:02:15.117296 ignition[728]: Stage: kargs
Jan 13 22:02:15.117720 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:02:15.117749 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:02:15.122336 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 22:02:15.120059 ignition[728]: kargs: kargs passed
Jan 13 22:02:15.120167 ignition[728]: Ignition finished successfully
Jan 13 22:02:15.132141 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 22:02:15.171822 ignition[734]: Ignition 2.19.0
Jan 13 22:02:15.173615 ignition[734]: Stage: disks
Jan 13 22:02:15.174100 ignition[734]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:02:15.174128 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:02:15.180478 ignition[734]: disks: disks passed
Jan 13 22:02:15.180588 ignition[734]: Ignition finished successfully
Jan 13 22:02:15.182509 systemd[1]: Finished ignition-disks.service - Ignition (disks).
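The fetch stage above shows Ignition's source order on OpenStack: the config drive first (both label spellings), then the metadata service, then a SHA512 of whatever it parsed. A compressed sketch of the same flow (the function name is ours, and the real implementation also mounts and reads the config drive when one is present):

    import hashlib, os, urllib.request

    URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data() -> bytes:
        for label in ("config-2", "CONFIG-2"):
            if os.path.exists(f"/dev/disk/by-label/{label}"):
                raise NotImplementedError("mount drive, read openstack/latest/user_data")
        with urllib.request.urlopen(URL, timeout=10) as resp:
            return resp.read()

    config = fetch_user_data()
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())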
Jan 13 22:02:15.186126 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 22:02:15.188240 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 22:02:15.191454 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 22:02:15.194659 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 22:02:15.197453 systemd[1]: Reached target basic.target - Basic System.
Jan 13 22:02:15.209070 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 22:02:15.256281 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 22:02:15.266054 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 22:02:15.276509 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 22:02:15.426867 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 22:02:15.427020 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 22:02:15.428084 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 22:02:15.440898 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 22:02:15.445714 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 22:02:15.449031 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 22:02:15.475410 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (750)
Jan 13 22:02:15.475467 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 22:02:15.475497 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:02:15.475526 kernel: BTRFS info (device vda6): using free space tree
Jan 13 22:02:15.475553 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 22:02:15.452892 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 22:02:15.454869 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 22:02:15.454902 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 22:02:15.481605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 22:02:15.488347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 22:02:15.501967 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 22:02:15.606398 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 22:02:15.618194 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Jan 13 22:02:15.622678 initrd-setup-root[793]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 22:02:15.627416 initrd-setup-root[800]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 22:02:15.769091 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 22:02:15.776997 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 22:02:15.783122 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 22:02:15.797700 systemd[1]: sysroot-oem.mount: Deactivated successfully.
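The fsck summary above doubles as a capacity report. Assuming the ext4 default 4 KiB block size (the log does not state it):

    used_blocks, total_blocks = 120691, 1617920   # from the systemd-fsck line
    BLOCK = 4096
    print(f"{used_blocks * BLOCK / 2**20:.0f} MiB used of "
          f"{total_blocks * BLOCK / 2**30:.1f} GiB "
          f"({100 * used_blocks / total_blocks:.1f}%)")
    # ~471 MiB of ~6.2 GiB (7.5%), plus 14 of 1628000 inodes: a fresh ROOT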
Jan 13 22:02:15.805591 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 22:02:15.856731 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 22:02:15.870344 ignition[867]: INFO : Ignition 2.19.0
Jan 13 22:02:15.872986 ignition[867]: INFO : Stage: mount
Jan 13 22:02:15.872986 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:02:15.872986 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:02:15.874875 ignition[867]: INFO : mount: mount passed
Jan 13 22:02:15.875371 ignition[867]: INFO : Ignition finished successfully
Jan 13 22:02:15.877089 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 22:02:16.491289 systemd-networkd[713]: eth0: Gained IPv6LL
Jan 13 22:02:22.715375 coreos-metadata[752]: Jan 13 22:02:22.715 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 22:02:22.755852 coreos-metadata[752]: Jan 13 22:02:22.755 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 22:02:22.771541 coreos-metadata[752]: Jan 13 22:02:22.771 INFO Fetch successful
Jan 13 22:02:22.774013 coreos-metadata[752]: Jan 13 22:02:22.771 INFO wrote hostname ci-4081-3-0-2-0f60d24a30.novalocal to /sysroot/etc/hostname
Jan 13 22:02:22.777196 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 22:02:22.777482 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 22:02:22.792096 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 22:02:22.814107 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 22:02:22.842876 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (884)
Jan 13 22:02:22.853846 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 22:02:22.853939 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:02:22.853969 kernel: BTRFS info (device vda6): using free space tree
Jan 13 22:02:22.865928 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 22:02:22.870398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
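The coreos-metadata lines above show the same config-drive-then-API fallback, this time for the hostname, which lands in /sysroot/etc/hostname before the pivot. Roughly, in Python (names are ours; the config-drive branch is elided since this boot had none):

    import os, urllib.request

    def set_hostname(root="/sysroot"):
        if not os.path.exists("/dev/disk/by-label/config-2"):
            print("WARN failed to locate config-drive, using the metadata service API instead")
        url = "http://169.254.169.254/latest/meta-data/hostname"
        with urllib.request.urlopen(url, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        with open(f"{root}/etc/hostname", "w") as f:
            f.write(hostname + "\n")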
Jan 13 22:02:22.926262 ignition[902]: INFO : Ignition 2.19.0
Jan 13 22:02:22.926262 ignition[902]: INFO : Stage: files
Jan 13 22:02:22.926262 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:02:22.926262 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:02:22.937393 ignition[902]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 22:02:22.937393 ignition[902]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 22:02:22.937393 ignition[902]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 22:02:22.943229 ignition[902]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 22:02:22.943229 ignition[902]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 22:02:22.943229 ignition[902]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 22:02:22.943229 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 22:02:22.943229 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 22:02:22.939895 unknown[902]: wrote ssh authorized keys file for user: core
Jan 13 22:02:22.995889 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 22:02:23.255023 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 22:02:23.255023 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 22:02:23.260210 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 22:02:23.808512 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 22:02:25.318757 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 22:02:25.318757 ignition[902]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 22:02:25.324937 ignition[902]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 22:02:25.324937 ignition[902]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 22:02:25.324937 ignition[902]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 22:02:25.324937 ignition[902]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 22:02:25.324937 ignition[902]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 22:02:25.324937 ignition[902]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 22:02:25.324937 ignition[902]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 22:02:25.324937 ignition[902]: INFO : files: files passed
Jan 13 22:02:25.324937 ignition[902]: INFO : Ignition finished successfully
Jan 13 22:02:25.324515 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 22:02:25.340762 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 22:02:25.354954 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 22:02:25.364066 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 22:02:25.364166 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 22:02:25.373389 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:02:25.373389 initrd-setup-root-after-ignition[930]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:02:25.376330 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:02:25.379216 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 22:02:25.380252 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 22:02:25.385940 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 22:02:25.422053 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 22:02:25.422293 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 22:02:25.425056 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 22:02:25.430089 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 22:02:25.432106 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 22:02:25.440081 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 22:02:25.458455 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 22:02:25.472076 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 22:02:25.491337 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 22:02:25.493645 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 22:02:25.496011 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 22:02:25.497962 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 22:02:25.498287 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 22:02:25.500716 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 22:02:25.502734 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 22:02:25.504633 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 22:02:25.506936 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 22:02:25.509176 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 22:02:25.511346 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 22:02:25.513472 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 22:02:25.515745 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 22:02:25.517946 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 22:02:25.519954 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 22:02:25.521703 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 22:02:25.522047 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 22:02:25.523960 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 22:02:25.524769 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 22:02:25.526006 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 22:02:25.526116 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 22:02:25.527221 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 22:02:25.527374 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 22:02:25.528712 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 22:02:25.528922 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 22:02:25.530142 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 22:02:25.530294 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 22:02:25.541280 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 22:02:25.541851 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 22:02:25.542024 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 22:02:25.545041 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 22:02:25.545582 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 22:02:25.545759 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 22:02:25.546543 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 22:02:25.549150 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 22:02:25.557563 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 22:02:25.557670 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 22:02:25.564688 ignition[954]: INFO : Ignition 2.19.0
Jan 13 22:02:25.566878 ignition[954]: INFO : Stage: umount
Jan 13 22:02:25.566878 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:02:25.566878 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:02:25.569211 ignition[954]: INFO : umount: umount passed
Jan 13 22:02:25.569211 ignition[954]: INFO : Ignition finished successfully
Jan 13 22:02:25.570899 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 22:02:25.571550 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 22:02:25.573200 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 22:02:25.573265 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 22:02:25.574518 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 22:02:25.574560 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 22:02:25.576920 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 22:02:25.576960 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 22:02:25.577518 systemd[1]: Stopped target network.target - Network.
Jan 13 22:02:25.577964 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 22:02:25.578010 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 22:02:25.578531 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 22:02:25.579909 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 22:02:25.583835 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 22:02:25.585182 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 22:02:25.585696 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 22:02:25.586736 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 22:02:25.586777 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 22:02:25.587745 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 22:02:25.587776 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 22:02:25.588751 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 22:02:25.588836 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 22:02:25.589822 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 22:02:25.589864 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 22:02:25.590963 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 22:02:25.592038 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 22:02:25.593965 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 22:02:25.594467 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 22:02:25.594571 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
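
The umount stage above retires the earlier Ignition units (fetch-offline, fetch, kargs, disks, setup, mount) in reverse order. A small sketch for extracting that order from journal text with a regex; the sample lines are abbreviated from the entries above:

    import re

    # Pull the unit name out of "Stopped ignition-<stage>.service" entries.
    STOPPED = re.compile(r"Stopped (ignition-[\w-]+)\.service")

    log = """\
    systemd[1]: Stopped ignition-mount.service - Ignition (mount).
    systemd[1]: Stopped ignition-disks.service - Ignition (disks).
    systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
    systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
    systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
    """

    print([m.group(1) for m in STOPPED.finditer(log)])
    # ['ignition-mount', 'ignition-disks', 'ignition-kargs',
    #  'ignition-fetch', 'ignition-fetch-offline']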
Jan 13 22:02:25.595690 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 22:02:25.595766 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 22:02:25.596859 systemd-networkd[713]: eth0: DHCPv6 lease lost
Jan 13 22:02:25.598330 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 22:02:25.598422 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 22:02:25.599180 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 22:02:25.599212 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 22:02:25.604017 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 22:02:25.605467 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 22:02:25.605530 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 22:02:25.608077 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 22:02:25.613866 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 22:02:25.613975 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 22:02:25.621118 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 22:02:25.621863 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 22:02:25.623487 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 22:02:25.624586 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 22:02:25.626954 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 22:02:25.627024 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 22:02:25.627649 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 22:02:25.627682 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 22:02:25.628854 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 22:02:25.628900 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 22:02:25.630608 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 22:02:25.630651 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 22:02:25.631883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 22:02:25.631926 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:02:25.642933 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 22:02:25.644139 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 22:02:25.644211 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:02:25.644737 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 22:02:25.644809 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 22:02:25.645354 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 22:02:25.645396 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 22:02:25.647762 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 22:02:25.647845 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 22:02:25.648556 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:02:25.648598 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:02:25.650163 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 22:02:25.650265 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 22:02:25.651443 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 22:02:25.667161 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 22:02:25.674521 systemd[1]: Switching root.
Jan 13 22:02:25.705147 systemd-journald[183]: Journal stopped
Jan 13 22:02:27.336205 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 13 22:02:27.336280 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 22:02:27.336300 kernel: SELinux: policy capability open_perms=1
Jan 13 22:02:27.336313 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 22:02:27.336325 kernel: SELinux: policy capability always_check_network=0
Jan 13 22:02:27.336338 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 22:02:27.336355 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 22:02:27.336368 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 22:02:27.336380 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 22:02:27.336393 systemd[1]: Successfully loaded SELinux policy in 79.014ms.
Jan 13 22:02:27.336418 kernel: audit: type=1403 audit(1736805746.341:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 22:02:27.336436 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.643ms.
Jan 13 22:02:27.336451 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 22:02:27.336466 systemd[1]: Detected virtualization kvm.
Jan 13 22:02:27.336479 systemd[1]: Detected architecture x86-64.
Jan 13 22:02:27.336495 systemd[1]: Detected first boot.
Jan 13 22:02:27.336508 systemd[1]: Hostname set to <ci-4081-3-0-2-0f60d24a30.novalocal>.
Jan 13 22:02:27.336522 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 22:02:27.336538 zram_generator::config[996]: No configuration found.
Jan 13 22:02:27.336552 systemd[1]: Populated /etc with preset unit settings.
Jan 13 22:02:27.336565 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 22:02:27.336578 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 22:02:27.336591 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 22:02:27.336607 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 22:02:27.336621 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 22:02:27.336635 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 22:02:27.336648 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 22:02:27.336663 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 22:02:27.336676 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 22:02:27.336690 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 22:02:27.336704 systemd[1]: Created slice user.slice - User and Session Slice.
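
The gap between the two journald instances brackets the switch-root: the initrd journal stops at 22:02:25.705147 and the first entries replayed into the new journal carry 22:02:27.336205. A sketch of the timestamp arithmetic (the year is assumed, since the journal prefix omits it):

    from datetime import datetime

    def parse_ts(stamp: str) -> datetime:
        # The "Jan 13 22:02:25.705147" prefix has no year; 2025 is assumed
        # here purely so strptime can build a datetime.
        return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S.%f")

    old_journal_stop = parse_ts("Jan 13 22:02:25.705147")   # "Journal stopped"
    new_journal_first = parse_ts("Jan 13 22:02:27.336205")  # first replayed entry

    print(new_journal_first - old_journal_stop)  # 0:00:01.631058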
Jan 13 22:02:27.336726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 22:02:27.336747 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 22:02:27.336765 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 22:02:27.336803 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 22:02:27.336820 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 22:02:27.336834 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 22:02:27.336847 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 22:02:27.336861 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 22:02:27.336874 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 22:02:27.336891 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 22:02:27.336905 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 22:02:27.336918 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 22:02:27.336932 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 22:02:27.336945 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 22:02:27.336958 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 22:02:27.336972 systemd[1]: Reached target swap.target - Swaps.
Jan 13 22:02:27.336987 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 22:02:27.337002 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 22:02:27.337016 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 22:02:27.337029 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 22:02:27.337042 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 22:02:27.337055 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 22:02:27.337068 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 22:02:27.337082 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 22:02:27.337095 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 22:02:27.337111 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:02:27.337124 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 22:02:27.337138 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 22:02:27.337151 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 22:02:27.337166 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 22:02:27.337179 systemd[1]: Reached target machines.target - Containers.
Jan 13 22:02:27.337193 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 22:02:27.337206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:02:27.337221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 22:02:27.337235 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 22:02:27.337248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 22:02:27.337262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 22:02:27.337275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 22:02:27.337289 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 22:02:27.337303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 22:02:27.337317 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 22:02:27.337332 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 22:02:27.337345 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 22:02:27.337359 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 22:02:27.337372 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 22:02:27.337385 kernel: fuse: init (API version 7.39)
Jan 13 22:02:27.337401 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 22:02:27.337414 kernel: loop: module loaded
Jan 13 22:02:27.337427 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 22:02:27.337456 systemd-journald[1099]: Collecting audit messages is disabled.
Jan 13 22:02:27.337484 systemd-journald[1099]: Journal started
Jan 13 22:02:27.337510 systemd-journald[1099]: Runtime Journal (/run/log/journal/524054a76aef4352b14d8f09d14d4412) is 8.0M, max 78.3M, 70.3M free.
Jan 13 22:02:27.000729 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 22:02:27.026244 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 22:02:27.026649 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 22:02:27.345994 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 22:02:27.346047 kernel: ACPI: bus type drm_connector registered
Jan 13 22:02:27.354820 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 22:02:27.359897 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 22:02:27.362384 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 22:02:27.362417 systemd[1]: Stopped verity-setup.service.
Jan 13 22:02:27.366824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:02:27.374826 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 22:02:27.375526 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 22:02:27.376186 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 22:02:27.376847 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 22:02:27.377450 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 22:02:27.378053 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 22:02:27.378645 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 22:02:27.379389 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 22:02:27.380209 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 22:02:27.381033 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 22:02:27.381211 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 22:02:27.381994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 22:02:27.382151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 22:02:27.382940 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 22:02:27.383114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 22:02:27.384010 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 22:02:27.384172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 22:02:27.385053 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 22:02:27.385196 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 22:02:27.385980 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 22:02:27.386139 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 22:02:27.386993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 22:02:27.387753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 22:02:27.388541 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 22:02:27.398718 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 22:02:27.405889 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 22:02:27.412956 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 22:02:27.415466 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 22:02:27.415511 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 22:02:27.417268 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 22:02:27.419902 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 22:02:27.428750 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 22:02:27.429572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:02:27.439066 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 22:02:27.444955 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 22:02:27.445572 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 22:02:27.446653 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 22:02:27.448103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 22:02:27.454997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 22:02:27.460177 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 22:02:27.475018 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 22:02:27.479493 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 22:02:27.482120 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 22:02:27.484185 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 22:02:27.506898 systemd-journald[1099]: Time spent on flushing to /var/log/journal/524054a76aef4352b14d8f09d14d4412 is 55.985ms for 945 entries.
Jan 13 22:02:27.506898 systemd-journald[1099]: System Journal (/var/log/journal/524054a76aef4352b14d8f09d14d4412) is 8.0M, max 584.8M, 576.8M free.
Jan 13 22:02:27.579263 systemd-journald[1099]: Received client request to flush runtime journal.
Jan 13 22:02:27.579314 kernel: loop0: detected capacity change from 0 to 140768
Jan 13 22:02:27.506897 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 22:02:27.511066 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 22:02:27.520068 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 22:02:27.521046 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 22:02:27.531093 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 22:02:27.566800 udevadm[1138]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 22:02:27.583109 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 22:02:27.607560 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:02:27.638993 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 22:02:27.640545 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 22:02:27.658821 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 22:02:27.662729 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 22:02:27.674942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 22:02:27.695822 kernel: loop1: detected capacity change from 0 to 142488
Jan 13 22:02:27.706534 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Jan 13 22:02:27.706966 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Jan 13 22:02:27.713745 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 22:02:27.751945 kernel: loop2: detected capacity change from 0 to 205544
Jan 13 22:02:27.819824 kernel: loop3: detected capacity change from 0 to 8
Jan 13 22:02:27.849097 kernel: loop4: detected capacity change from 0 to 140768
Jan 13 22:02:27.902818 kernel: loop5: detected capacity change from 0 to 142488
Jan 13 22:02:27.951353 kernel: loop6: detected capacity change from 0 to 205544
Jan 13 22:02:28.009652 kernel: loop7: detected capacity change from 0 to 8
Jan 13 22:02:28.010598 (sd-merge)[1154]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 22:02:28.011120 (sd-merge)[1154]: Merged extensions into '/usr'.
Jan 13 22:02:28.017053 systemd[1]: Reloading requested from client PID 1129 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 22:02:28.017070 systemd[1]: Reloading...
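
The journald flush statistics above ("55.985ms for 945 entries") imply an average cost per entry of roughly 59 microseconds; a quick check of the arithmetic:

    # Average flush cost from the journald statistics above.
    flush_ms, entries = 55.985, 945
    print(f"{flush_ms / entries * 1000:.1f} us/entry")  # -> 59.2 us/entry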
Jan 13 22:02:28.108669 zram_generator::config[1176]: No configuration found.
Jan 13 22:02:28.330134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 22:02:28.387926 systemd[1]: Reloading finished in 370 ms.
Jan 13 22:02:28.400916 ldconfig[1124]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 22:02:28.413545 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 22:02:28.414677 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 22:02:28.425408 systemd[1]: Starting ensure-sysext.service...
Jan 13 22:02:28.430990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 22:02:28.435957 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Jan 13 22:02:28.435971 systemd[1]: Reloading...
Jan 13 22:02:28.477068 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 22:02:28.477445 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 22:02:28.479348 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 22:02:28.479658 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 13 22:02:28.479729 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 13 22:02:28.485745 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 22:02:28.485759 systemd-tmpfiles[1237]: Skipping /boot
Jan 13 22:02:28.497825 zram_generator::config[1261]: No configuration found.
Jan 13 22:02:28.498225 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 22:02:28.498238 systemd-tmpfiles[1237]: Skipping /boot
Jan 13 22:02:28.649512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 22:02:28.705995 systemd[1]: Reloading finished in 269 ms.
Jan 13 22:02:28.721990 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 22:02:28.728135 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 22:02:28.742516 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 22:02:28.753003 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 22:02:28.763189 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 22:02:28.767959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 22:02:28.780082 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 22:02:28.784962 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 22:02:28.799901 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 22:02:28.804948 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:02:28.805125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:02:28.809094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 22:02:28.812907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 22:02:28.821889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 22:02:28.822565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:02:28.822687 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:02:28.829694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:02:28.829962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:02:28.830171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:02:28.830278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:02:28.832867 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 22:02:28.842553 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 22:02:28.846153 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:02:28.846445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:02:28.857253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 22:02:28.858081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:02:28.862075 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 22:02:28.862694 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:02:28.864766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 22:02:28.865048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 22:02:28.876920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 22:02:28.877121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 22:02:28.879133 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 22:02:28.879334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 22:02:28.886854 systemd[1]: Finished ensure-sysext.service.
Jan 13 22:02:28.890655 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 22:02:28.890962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 22:02:28.901766 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 22:02:28.903869 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 22:02:28.909034 augenrules[1354]: No rules
Jan 13 22:02:28.904736 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 22:02:28.905921 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 22:02:28.906752 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 22:02:28.907263 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Jan 13 22:02:28.911414 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 22:02:28.919191 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 22:02:28.921586 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 22:02:28.955208 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 22:02:28.965636 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 22:02:29.005637 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 22:02:29.006371 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 22:02:29.045095 systemd-resolved[1331]: Positive Trust Anchors:
Jan 13 22:02:29.045427 systemd-networkd[1371]: lo: Link UP
Jan 13 22:02:29.045432 systemd-networkd[1371]: lo: Gained carrier
Jan 13 22:02:29.045974 systemd-networkd[1371]: Enumeration completed
Jan 13 22:02:29.046090 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 22:02:29.048175 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 22:02:29.048280 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 22:02:29.054965 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 22:02:29.064624 systemd-resolved[1331]: Using system hostname 'ci-4081-3-0-2-0f60d24a30.novalocal'.
Jan 13 22:02:29.066762 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 22:02:29.067474 systemd[1]: Reached target network.target - Network.
Jan 13 22:02:29.068437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 22:02:29.093343 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 22:02:29.112876 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383)
Jan 13 22:02:29.137410 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 22:02:29.137419 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:02:29.138479 systemd-networkd[1371]: eth0: Link UP
Jan 13 22:02:29.138483 systemd-networkd[1371]: eth0: Gained carrier
Jan 13 22:02:29.138497 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 22:02:29.147883 systemd-networkd[1371]: eth0: DHCPv4 address 172.24.4.131/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 22:02:29.148922 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection.
Jan 13 22:02:29.169881 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 22:02:29.176824 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 13 22:02:29.188429 kernel: ACPI: button: Power Button [PWRF]
Jan 13 22:02:29.194185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 22:02:29.202060 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 22:02:29.219969 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 22:02:29.228607 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 22:02:29.250348 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 22:02:29.254250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:02:29.255947 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 13 22:02:29.256024 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 13 22:02:29.265111 kernel: Console: switching to colour dummy device 80x25
Jan 13 22:02:29.265203 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 22:02:29.265218 kernel: [drm] features: -context_init
Jan 13 22:02:29.269297 kernel: [drm] number of scanouts: 1
Jan 13 22:02:29.269455 kernel: [drm] number of cap sets: 0
Jan 13 22:02:29.274136 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 13 22:02:29.273952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:02:29.274168 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:02:29.282529 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 13 22:02:29.282610 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 22:02:29.287042 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 22:02:29.286131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:02:29.297816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:02:29.298087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:02:29.308940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:02:29.309442 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 22:02:29.313966 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 22:02:29.341858 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 22:02:29.372870 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 22:02:29.374207 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
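
The DHCPv4 lease above can be sanity-checked with Python's standard ipaddress module, for example to confirm the gateway is on-link for the leased /24:

    import ipaddress

    iface = ipaddress.ip_interface("172.24.4.131/24")  # leased address/prefix from the log
    gateway = ipaddress.ip_address("172.24.4.1")

    print(iface.network)             # 172.24.4.0/24
    print(gateway in iface.network)  # True -> the gateway is on-link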
Jan 13 22:02:29.380919 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 22:02:29.385162 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 22:02:29.402179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:02:29.402413 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 22:02:29.402574 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 22:02:29.402679 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 22:02:29.403273 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 22:02:29.404444 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 22:02:29.404668 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 22:02:29.404974 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 22:02:29.405024 systemd[1]: Reached target paths.target - Path Units.
Jan 13 22:02:29.405152 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 22:02:29.406993 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 22:02:29.409713 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 22:02:29.417306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 22:02:29.418354 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 22:02:29.419698 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 22:02:29.420561 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 22:02:29.423497 systemd[1]: Reached target basic.target - Basic System.
Jan 13 22:02:29.424598 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 22:02:29.424658 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 22:02:29.434038 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 22:02:29.441118 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 22:02:29.449057 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 22:02:29.456989 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 22:02:29.466983 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 22:02:29.467615 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 22:02:29.475509 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 22:02:29.479421 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 22:02:29.493075 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 22:02:29.495643 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 22:02:29.504285 jq[1429]: false
Jan 13 22:02:29.512993 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 22:02:29.516114 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 22:02:29.516645 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 22:02:29.519458 dbus-daemon[1428]: [system] SELinux support is enabled
Jan 13 22:02:29.524045 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found loop4
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found loop5
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found loop6
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found loop7
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found vda
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found vda1
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found vda2
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found vda3
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found usr
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found vda4
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found vda6
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found vda7
Jan 13 22:02:29.524972 extend-filesystems[1432]: Found vda9
Jan 13 22:02:29.524972 extend-filesystems[1432]: Checking size of /dev/vda9
Jan 13 22:02:29.666115 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 13 22:02:29.666161 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1375)
Jan 13 22:02:29.666179 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 13 22:02:29.666197 extend-filesystems[1432]: Resized partition /dev/vda9
Jan 13 22:02:29.540948 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 22:02:29.679214 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024)
Jan 13 22:02:29.679214 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 22:02:29.679214 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 22:02:29.679214 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 13 22:02:29.701977 jq[1445]: true
Jan 13 22:02:29.544306 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 22:02:29.702315 extend-filesystems[1432]: Resized filesystem in /dev/vda9
Jan 13 22:02:29.704586 update_engine[1441]: I20250113 22:02:29.631719 1441 main.cc:92] Flatcar Update Engine starting
Jan 13 22:02:29.704586 update_engine[1441]: I20250113 22:02:29.637588 1441 update_check_scheduler.cc:74] Next update check in 4m34s
Jan 13 22:02:29.566728 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 22:02:29.566942 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 22:02:29.568120 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 22:02:29.710205 jq[1457]: true
Jan 13 22:02:29.568855 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 22:02:29.710437 tar[1455]: linux-amd64/helm
Jan 13 22:02:29.579053 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 22:02:29.579585 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
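
resize2fs reports sizes in 4 KiB blocks, so the online resize above grew the root filesystem from about 6.2 GiB to about 7.7 GiB; the conversion:

    # Convert the ext4 block counts from the resize2fs output into GiB.
    BLOCK = 4096  # "(4k) blocks" per the resize2fs output
    for label, blocks in [("before", 1_617_920), ("after", 2_014_203)]:
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 6.17 GiB
    # after: 7.68 GiB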
Jan 13 22:02:29.603893 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 22:02:29.603922 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 22:02:29.635143 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 22:02:29.635167 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 22:02:29.643206 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 22:02:29.667627 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 22:02:29.677027 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 22:02:29.679536 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 22:02:29.679702 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 22:02:29.771550 systemd-logind[1440]: New seat seat0.
Jan 13 22:02:29.777737 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 22:02:29.777762 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 22:02:29.778028 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 22:02:29.828325 bash[1486]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 22:02:29.831868 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 22:02:29.850081 systemd[1]: Starting sshkeys.service...
Jan 13 22:02:29.881041 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 22:02:29.892129 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 22:02:29.892954 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 22:02:30.083707 containerd[1458]: time="2025-01-13T22:02:30.083571233Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 22:02:30.117542 containerd[1458]: time="2025-01-13T22:02:30.117488706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 22:02:30.119214 containerd[1458]: time="2025-01-13T22:02:30.119184195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:02:30.119299 containerd[1458]: time="2025-01-13T22:02:30.119283812Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 22:02:30.119413 containerd[1458]: time="2025-01-13T22:02:30.119397606Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 22:02:30.119679 containerd[1458]: time="2025-01-13T22:02:30.119661350Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 22:02:30.119745 containerd[1458]: time="2025-01-13T22:02:30.119731572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 22:02:30.119926 containerd[1458]: time="2025-01-13T22:02:30.119860854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:02:30.120001 containerd[1458]: time="2025-01-13T22:02:30.119984747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 22:02:30.120297 containerd[1458]: time="2025-01-13T22:02:30.120275492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:02:30.120405 containerd[1458]: time="2025-01-13T22:02:30.120390227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 22:02:30.120475 containerd[1458]: time="2025-01-13T22:02:30.120458906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:02:30.120526 containerd[1458]: time="2025-01-13T22:02:30.120514100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 22:02:30.120716 containerd[1458]: time="2025-01-13T22:02:30.120698335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 22:02:30.121025 containerd[1458]: time="2025-01-13T22:02:30.121006072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 22:02:30.121193 containerd[1458]: time="2025-01-13T22:02:30.121172755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:02:30.121260 containerd[1458]: time="2025-01-13T22:02:30.121246463Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 22:02:30.121388 containerd[1458]: time="2025-01-13T22:02:30.121370506Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 22:02:30.121495 containerd[1458]: time="2025-01-13T22:02:30.121478398Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 22:02:30.131544 containerd[1458]: time="2025-01-13T22:02:30.131519500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 22:02:30.131646 containerd[1458]: time="2025-01-13T22:02:30.131631039Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 22:02:30.131801 containerd[1458]: time="2025-01-13T22:02:30.131766473Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 22:02:30.131951 containerd[1458]: time="2025-01-13T22:02:30.131934689Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 22:02:30.132020 containerd[1458]: time="2025-01-13T22:02:30.132004961Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 22:02:30.132242 containerd[1458]: time="2025-01-13T22:02:30.132224432Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.132820470Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.132921199Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.132941207Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.132960373Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.132979068Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.133001630Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.133018902Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.133036125Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.133056373Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.133072523Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.133087271Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.133102559Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.133126033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134480 containerd[1458]: time="2025-01-13T22:02:30.133143196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133159065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133175186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133190875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133218356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133235949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133253232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133269522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133287135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133301332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133317482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133332901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133351997Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133374960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133389167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.134771 containerd[1458]: time="2025-01-13T22:02:30.133402462Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 22:02:30.135084 containerd[1458]: time="2025-01-13T22:02:30.133447817Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 22:02:30.135084 containerd[1458]: time="2025-01-13T22:02:30.133468776Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 22:02:30.135084 containerd[1458]: time="2025-01-13T22:02:30.133482612Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 22:02:30.135084 containerd[1458]: time="2025-01-13T22:02:30.133497400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 22:02:30.135084 containerd[1458]: time="2025-01-13T22:02:30.133509503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.135084 containerd[1458]: time="2025-01-13T22:02:30.133523799Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 22:02:30.135084 containerd[1458]: time="2025-01-13T22:02:30.133535391Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 22:02:30.135084 containerd[1458]: time="2025-01-13T22:02:30.133546672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 22:02:30.136819 containerd[1458]: time="2025-01-13T22:02:30.136719363Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 22:02:30.138806 containerd[1458]: time="2025-01-13T22:02:30.137346479Z" level=info msg="Connect containerd service"
Jan 13 22:02:30.138806 containerd[1458]: time="2025-01-13T22:02:30.137393597Z" level=info msg="using legacy CRI server"
Jan 13 22:02:30.138806 containerd[1458]: time="2025-01-13T22:02:30.137404247Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 22:02:30.138806 containerd[1458]: time="2025-01-13T22:02:30.137507561Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 22:02:30.139439 containerd[1458]: time="2025-01-13T22:02:30.139405450Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 22:02:30.140410 containerd[1458]: time="2025-01-13T22:02:30.140374859Z" level=info msg="Start subscribing containerd event"
Jan 13 22:02:30.140509 containerd[1458]: time="2025-01-13T22:02:30.140494122Z" level=info msg="Start recovering state"
Jan 13 22:02:30.140637 containerd[1458]: time="2025-01-13T22:02:30.140621291Z" level=info msg="Start event monitor"
Jan 13 22:02:30.140698 containerd[1458]: time="2025-01-13T22:02:30.140685571Z" level=info msg="Start snapshots syncer"
Jan 13 22:02:30.140755 containerd[1458]: time="2025-01-13T22:02:30.140742979Z" level=info msg="Start cni network conf syncer for default"
Jan 13 22:02:30.140885 containerd[1458]: time="2025-01-13T22:02:30.140869346Z" level=info msg="Start streaming server"
Jan 13 22:02:30.141985 containerd[1458]: time="2025-01-13T22:02:30.141967576Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 22:02:30.142088 containerd[1458]: time="2025-01-13T22:02:30.142072092Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 22:02:30.142566 containerd[1458]: time="2025-01-13T22:02:30.142544898Z" level=info msg="containerd successfully booted in 0.059722s"
Jan 13 22:02:30.142628 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 22:02:30.360990 tar[1455]: linux-amd64/LICENSE
Jan 13 22:02:30.361408 tar[1455]: linux-amd64/README.md
Jan 13 22:02:30.381273 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 22:02:31.083116 systemd-networkd[1371]: eth0: Gained IPv6LL
Jan 13 22:02:31.084290 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection.
Jan 13 22:02:31.091755 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 22:02:31.099063 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 22:02:31.117976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:02:31.132368 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 22:02:31.142061 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 22:02:31.175047 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 22:02:31.178932 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 22:02:31.191325 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 22:02:31.202701 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 22:02:31.203769 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 22:02:31.212711 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 22:02:31.224873 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 22:02:31.236368 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 22:02:31.246182 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 22:02:31.247590 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 22:02:33.450145 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 22:02:33.464545 systemd[1]: Started sshd@0-172.24.4.131:22-172.24.4.1:45620.service - OpenSSH per-connection server daemon (172.24.4.1:45620).
Jan 13 22:02:33.551323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:02:33.565688 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:02:34.995180 sshd[1538]: Accepted publickey for core from 172.24.4.1 port 45620 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:35.020633 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:35.051157 systemd-logind[1440]: New session 1 of user core.
Jan 13 22:02:35.056434 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 22:02:35.073689 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 22:02:35.106174 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 22:02:35.126125 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 22:02:35.143480 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 22:02:35.532551 systemd[1553]: Queued start job for default target default.target.
Jan 13 22:02:35.540644 systemd[1553]: Created slice app.slice - User Application Slice.
Jan 13 22:02:35.540670 systemd[1553]: Reached target paths.target - Paths.
Jan 13 22:02:35.540684 systemd[1553]: Reached target timers.target - Timers.
Jan 13 22:02:35.542906 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 22:02:35.554370 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 22:02:35.554491 systemd[1553]: Reached target sockets.target - Sockets.
Jan 13 22:02:35.554509 systemd[1553]: Reached target basic.target - Basic System.
Jan 13 22:02:35.554550 systemd[1553]: Reached target default.target - Main User Target.
Jan 13 22:02:35.554579 systemd[1553]: Startup finished in 404ms.
Jan 13 22:02:35.554649 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 22:02:35.566998 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 22:02:35.607611 kubelet[1544]: E0113 22:02:35.607543 1544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:02:35.611377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:02:35.611520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:02:35.612013 systemd[1]: kubelet.service: Consumed 2.170s CPU time.
Jan 13 22:02:36.036483 systemd[1]: Started sshd@1-172.24.4.131:22-172.24.4.1:45628.service - OpenSSH per-connection server daemon (172.24.4.1:45628).
Jan 13 22:02:36.300604 login[1533]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 22:02:36.311291 login[1534]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 22:02:36.315417 systemd-logind[1440]: New session 2 of user core.
Jan 13 22:02:36.324700 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 22:02:36.332037 systemd-logind[1440]: New session 3 of user core.
Jan 13 22:02:36.340531 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 22:02:36.634339 coreos-metadata[1427]: Jan 13 22:02:36.634 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 22:02:36.696114 coreos-metadata[1427]: Jan 13 22:02:36.696 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 13 22:02:36.886239 coreos-metadata[1427]: Jan 13 22:02:36.885 INFO Fetch successful
Jan 13 22:02:36.886239 coreos-metadata[1427]: Jan 13 22:02:36.886 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 22:02:36.900901 coreos-metadata[1427]: Jan 13 22:02:36.900 INFO Fetch successful
Jan 13 22:02:36.900901 coreos-metadata[1427]: Jan 13 22:02:36.900 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 13 22:02:36.912450 coreos-metadata[1427]: Jan 13 22:02:36.912 INFO Fetch successful
Jan 13 22:02:36.912450 coreos-metadata[1427]: Jan 13 22:02:36.912 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 13 22:02:36.926693 coreos-metadata[1427]: Jan 13 22:02:36.926 INFO Fetch successful
Jan 13 22:02:36.926693 coreos-metadata[1427]: Jan 13 22:02:36.926 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 13 22:02:36.940738 coreos-metadata[1427]: Jan 13 22:02:36.940 INFO Fetch successful
Jan 13 22:02:36.940738 coreos-metadata[1427]: Jan 13 22:02:36.940 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 13 22:02:36.955068 coreos-metadata[1427]: Jan 13 22:02:36.954 INFO Fetch successful
Jan 13 22:02:36.996826 coreos-metadata[1493]: Jan 13 22:02:36.996 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 22:02:37.001906 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 22:02:37.006092 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 22:02:37.041331 coreos-metadata[1493]: Jan 13 22:02:37.041 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 13 22:02:37.054939 coreos-metadata[1493]: Jan 13 22:02:37.054 INFO Fetch successful
Jan 13 22:02:37.055328 coreos-metadata[1493]: Jan 13 22:02:37.055 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 22:02:37.066361 coreos-metadata[1493]: Jan 13 22:02:37.066 INFO Fetch successful
Jan 13 22:02:37.072995 unknown[1493]: wrote ssh authorized keys file for user: core
Jan 13 22:02:37.129918 update-ssh-keys[1603]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 22:02:37.131128 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 22:02:37.135484 systemd[1]: Finished sshkeys.service.
Jan 13 22:02:37.141216 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 22:02:37.143198 systemd[1]: Startup finished in 1.256s (kernel) + 14.508s (initrd) + 10.880s (userspace) = 26.645s.
Jan 13 22:02:37.559461 sshd[1565]: Accepted publickey for core from 172.24.4.1 port 45628 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:37.562407 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:37.573187 systemd-logind[1440]: New session 4 of user core.
Jan 13 22:02:37.581142 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 22:02:38.249823 sshd[1565]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:38.261070 systemd[1]: sshd@1-172.24.4.131:22-172.24.4.1:45628.service: Deactivated successfully.
Jan 13 22:02:38.264139 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 22:02:38.266047 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit.
Jan 13 22:02:38.273364 systemd[1]: Started sshd@2-172.24.4.131:22-172.24.4.1:45630.service - OpenSSH per-connection server daemon (172.24.4.1:45630).
Jan 13 22:02:38.275992 systemd-logind[1440]: Removed session 4.
Jan 13 22:02:39.625150 sshd[1611]: Accepted publickey for core from 172.24.4.1 port 45630 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:39.627936 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:39.640079 systemd-logind[1440]: New session 5 of user core.
Jan 13 22:02:39.648199 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 22:02:40.333380 sshd[1611]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:40.347031 systemd[1]: sshd@2-172.24.4.131:22-172.24.4.1:45630.service: Deactivated successfully.
Jan 13 22:02:40.350341 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 22:02:40.354210 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
Jan 13 22:02:40.361582 systemd[1]: Started sshd@3-172.24.4.131:22-172.24.4.1:45646.service - OpenSSH per-connection server daemon (172.24.4.1:45646).
Jan 13 22:02:40.365472 systemd-logind[1440]: Removed session 5.
Jan 13 22:02:41.856672 sshd[1618]: Accepted publickey for core from 172.24.4.1 port 45646 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:41.859527 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:41.868947 systemd-logind[1440]: New session 6 of user core.
Jan 13 22:02:41.878195 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 22:02:42.565762 sshd[1618]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:42.578023 systemd[1]: sshd@3-172.24.4.131:22-172.24.4.1:45646.service: Deactivated successfully.
Jan 13 22:02:42.580957 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 22:02:42.582927 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
Jan 13 22:02:42.594360 systemd[1]: Started sshd@4-172.24.4.131:22-172.24.4.1:49972.service - OpenSSH per-connection server daemon (172.24.4.1:49972).
Jan 13 22:02:42.597741 systemd-logind[1440]: Removed session 6.
Jan 13 22:02:43.757121 sshd[1625]: Accepted publickey for core from 172.24.4.1 port 49972 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:43.759951 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:43.771486 systemd-logind[1440]: New session 7 of user core.
Jan 13 22:02:43.781092 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 22:02:44.098887 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 22:02:44.099628 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 22:02:44.119314 sudo[1628]: pam_unix(sudo:session): session closed for user root
Jan 13 22:02:44.384056 sshd[1625]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:44.396228 systemd[1]: sshd@4-172.24.4.131:22-172.24.4.1:49972.service: Deactivated successfully.
Jan 13 22:02:44.400221 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 22:02:44.406307 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit.
Jan 13 22:02:44.409384 systemd[1]: Started sshd@5-172.24.4.131:22-172.24.4.1:49974.service - OpenSSH per-connection server daemon (172.24.4.1:49974).
Jan 13 22:02:44.412619 systemd-logind[1440]: Removed session 7.
Jan 13 22:02:45.568289 sshd[1633]: Accepted publickey for core from 172.24.4.1 port 49974 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:45.571150 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:45.580080 systemd-logind[1440]: New session 8 of user core.
Jan 13 22:02:45.589146 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 22:02:45.659913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 22:02:45.668181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:02:45.952766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:02:45.969331 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:02:46.049402 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 22:02:46.050746 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 22:02:46.059085 sudo[1651]: pam_unix(sudo:session): session closed for user root
Jan 13 22:02:46.072514 kubelet[1644]: E0113 22:02:46.072378 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:02:46.074409 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 13 22:02:46.074703 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 22:02:46.081098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:02:46.081423 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:02:46.095311 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 13 22:02:46.097255 auditctl[1656]: No rules
Jan 13 22:02:46.099169 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 22:02:46.099553 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 13 22:02:46.106496 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 22:02:46.144804 augenrules[1675]: No rules
Jan 13 22:02:46.146114 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 22:02:46.148835 sudo[1649]: pam_unix(sudo:session): session closed for user root
Jan 13 22:02:46.315190 sshd[1633]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:46.328850 systemd[1]: sshd@5-172.24.4.131:22-172.24.4.1:49974.service: Deactivated successfully.
Jan 13 22:02:46.332454 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 22:02:46.336393 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit.
Jan 13 22:02:46.347379 systemd[1]: Started sshd@6-172.24.4.131:22-172.24.4.1:49976.service - OpenSSH per-connection server daemon (172.24.4.1:49976).
Jan 13 22:02:46.350486 systemd-logind[1440]: Removed session 8.
Jan 13 22:02:47.713854 sshd[1683]: Accepted publickey for core from 172.24.4.1 port 49976 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:47.716994 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:47.726914 systemd-logind[1440]: New session 9 of user core.
Jan 13 22:02:47.736079 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 22:02:48.035363 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 22:02:48.036164 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 22:02:48.667065 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 22:02:48.679463 (dockerd)[1703]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 22:02:49.519958 dockerd[1703]: time="2025-01-13T22:02:49.519520892Z" level=info msg="Starting up"
Jan 13 22:02:49.866092 dockerd[1703]: time="2025-01-13T22:02:49.865860832Z" level=info msg="Loading containers: start."
Jan 13 22:02:50.056963 kernel: Initializing XFRM netlink socket
Jan 13 22:02:50.087947 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection.
Jan 13 22:02:51.149122 systemd-resolved[1331]: Clock change detected. Flushing caches.
Jan 13 22:02:51.149563 systemd-timesyncd[1355]: Contacted time server 164.132.166.29:123 (2.flatcar.pool.ntp.org).
Jan 13 22:02:51.149657 systemd-timesyncd[1355]: Initial clock synchronization to Mon 2025-01-13 22:02:51.149046 UTC.
Jan 13 22:02:51.182529 systemd-networkd[1371]: docker0: Link UP
Jan 13 22:02:51.200134 dockerd[1703]: time="2025-01-13T22:02:51.200047284Z" level=info msg="Loading containers: done."
Jan 13 22:02:51.223549 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4192284049-merged.mount: Deactivated successfully.
Jan 13 22:02:51.226864 dockerd[1703]: time="2025-01-13T22:02:51.226352920Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 22:02:51.226864 dockerd[1703]: time="2025-01-13T22:02:51.226482774Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 13 22:02:51.226864 dockerd[1703]: time="2025-01-13T22:02:51.226602298Z" level=info msg="Daemon has completed initialization"
Jan 13 22:02:51.303189 dockerd[1703]: time="2025-01-13T22:02:51.302933928Z" level=info msg="API listen on /run/docker.sock"
Jan 13 22:02:51.303717 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 22:02:52.863799 containerd[1458]: time="2025-01-13T22:02:52.863653878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Jan 13 22:02:53.636532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690898751.mount: Deactivated successfully.
Jan 13 22:02:55.031409 containerd[1458]: time="2025-01-13T22:02:55.031354219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:02:55.033114 containerd[1458]: time="2025-01-13T22:02:55.033085737Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975491"
Jan 13 22:02:55.034144 containerd[1458]: time="2025-01-13T22:02:55.034061236Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:02:55.038802 containerd[1458]: time="2025-01-13T22:02:55.038395305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:02:55.040359 containerd[1458]: time="2025-01-13T22:02:55.040328360Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.176564607s"
Jan 13 22:02:55.040459 containerd[1458]: time="2025-01-13T22:02:55.040442384Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Jan 13 22:02:55.047132 containerd[1458]: time="2025-01-13T22:02:55.047113676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Jan 13 22:02:57.011261 containerd[1458]: time="2025-01-13T22:02:57.011167899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:02:57.013800 containerd[1458]: time="2025-01-13T22:02:57.013273408Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702165"
Jan 13 22:02:57.015368 containerd[1458]: time="2025-01-13T22:02:57.015254584Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:02:57.024798 containerd[1458]: time="2025-01-13T22:02:57.023184346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:02:57.025836 containerd[1458]: time="2025-01-13T22:02:57.025797908Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.978540281s"
Jan 13 22:02:57.025901 containerd[1458]: time="2025-01-13T22:02:57.025844104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Jan 13 22:02:57.029926 containerd[1458]: time="2025-01-13T22:02:57.029890213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Jan 13 22:02:57.187420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 22:02:57.196651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:02:57.407158 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:02:57.407181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:02:57.586853 kubelet[1909]: E0113 22:02:57.586499 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:02:57.591360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:02:57.591813 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:02:59.404262 containerd[1458]: time="2025-01-13T22:02:59.404036864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:02:59.405802 containerd[1458]: time="2025-01-13T22:02:59.405613642Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652075"
Jan 13 22:02:59.407258 containerd[1458]: time="2025-01-13T22:02:59.407214394Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:02:59.411103 containerd[1458]: time="2025-01-13T22:02:59.411030491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:02:59.412442 containerd[1458]: time="2025-01-13T22:02:59.412262341Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 2.382214774s"
Jan 13 22:02:59.412442 containerd[1458]: time="2025-01-13T22:02:59.412298780Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Jan 13 22:02:59.413490 containerd[1458]: time="2025-01-13T22:02:59.413471138Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 22:03:00.827425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4142608445.mount: Deactivated successfully.
Jan 13 22:03:01.407680 containerd[1458]: time="2025-01-13T22:03:01.407603623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:01.408884 containerd[1458]: time="2025-01-13T22:03:01.408669412Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251"
Jan 13 22:03:01.411458 containerd[1458]: time="2025-01-13T22:03:01.410111306Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:01.417625 containerd[1458]: time="2025-01-13T22:03:01.417582479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:01.418478 containerd[1458]: time="2025-01-13T22:03:01.418452270Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.004891314s"
Jan 13 22:03:01.419171 containerd[1458]: time="2025-01-13T22:03:01.419150570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Jan 13 22:03:01.419978 containerd[1458]: time="2025-01-13T22:03:01.419950129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 22:03:02.099709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223689219.mount: Deactivated successfully.
Jan 13 22:03:03.317683 containerd[1458]: time="2025-01-13T22:03:03.317620540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:03.320430 containerd[1458]: time="2025-01-13T22:03:03.320393630Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 13 22:03:03.321621 containerd[1458]: time="2025-01-13T22:03:03.321595555Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:03.325584 containerd[1458]: time="2025-01-13T22:03:03.325560781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:03.327223 containerd[1458]: time="2025-01-13T22:03:03.327172384Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.907183372s"
Jan 13 22:03:03.327274 containerd[1458]: time="2025-01-13T22:03:03.327226165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 22:03:03.327847 containerd[1458]: time="2025-01-13T22:03:03.327818436Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 13 22:03:04.025951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59327604.mount: Deactivated successfully.
Jan 13 22:03:04.038418 containerd[1458]: time="2025-01-13T22:03:04.038300304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:04.040453 containerd[1458]: time="2025-01-13T22:03:04.040346873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 13 22:03:04.042201 containerd[1458]: time="2025-01-13T22:03:04.042070425Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:04.047943 containerd[1458]: time="2025-01-13T22:03:04.047747943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:04.050765 containerd[1458]: time="2025-01-13T22:03:04.049995839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 722.055926ms"
Jan 13 22:03:04.050765 containerd[1458]: time="2025-01-13T22:03:04.050088733Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 13 22:03:04.051206 containerd[1458]: time="2025-01-13T22:03:04.051136358Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 13 22:03:04.706074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229450086.mount: Deactivated successfully.
Jan 13 22:03:07.404297 containerd[1458]: time="2025-01-13T22:03:07.404223643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:07.409407 containerd[1458]: time="2025-01-13T22:03:07.409286999Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981"
Jan 13 22:03:07.488739 containerd[1458]: time="2025-01-13T22:03:07.488639745Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:07.501316 containerd[1458]: time="2025-01-13T22:03:07.501152172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:07.507504 containerd[1458]: time="2025-01-13T22:03:07.507385643Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.456164385s"
Jan 13 22:03:07.507504 containerd[1458]: time="2025-01-13T22:03:07.507462507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 13 22:03:07.686396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 22:03:07.693597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:03:07.879139 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:03:07.879156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:03:07.928819 kubelet[2045]: E0113 22:03:07.927823 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:03:07.932381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:03:07.932731 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:03:10.916281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:03:10.929099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:03:10.964012 systemd[1]: Reloading requested from client PID 2074 ('systemctl') (unit session-9.scope)...
Jan 13 22:03:10.964154 systemd[1]: Reloading...
Jan 13 22:03:11.059066 zram_generator::config[2117]: No configuration found.
Jan 13 22:03:11.202713 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 22:03:11.286373 systemd[1]: Reloading finished in 321 ms.
Jan 13 22:03:11.336885 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 22:03:11.336962 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 22:03:11.337267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:03:11.339166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:03:11.463925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:03:11.475026 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 22:03:11.523265 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 22:03:11.523265 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 22:03:11.523265 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 22:03:11.524147 kubelet[2178]: I0113 22:03:11.523313 2178 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 22:03:12.282226 kubelet[2178]: I0113 22:03:12.282150 2178 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 22:03:12.282226 kubelet[2178]: I0113 22:03:12.282212 2178 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 22:03:12.282859 kubelet[2178]: I0113 22:03:12.282768 2178 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 22:03:12.323150 kubelet[2178]: E0113 22:03:12.323045 2178 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError"
Jan 13 22:03:12.325111 kubelet[2178]: I0113 22:03:12.324721 2178 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 22:03:12.337853 kubelet[2178]: E0113 22:03:12.337750 2178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 22:03:12.338308 kubelet[2178]: I0113 22:03:12.338173 2178 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 22:03:12.348230 kubelet[2178]: I0113 22:03:12.348195 2178 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 22:03:12.348729 kubelet[2178]: I0113 22:03:12.348569 2178 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 22:03:12.349811 kubelet[2178]: I0113 22:03:12.349109 2178 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 22:03:12.349811 kubelet[2178]: I0113 22:03:12.349158 2178 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-0f60d24a30.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 22:03:12.349811 kubelet[2178]: I0113 22:03:12.349537 2178 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 22:03:12.349811 kubelet[2178]: I0113 22:03:12.349563 2178 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 22:03:12.350109 kubelet[2178]: I0113 22:03:12.349750 2178 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 22:03:12.360984 kubelet[2178]: I0113 22:03:12.360724 2178 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 22:03:12.360984 kubelet[2178]: I0113 22:03:12.360762 2178 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 22:03:12.360984 kubelet[2178]: I0113 22:03:12.360814 2178 kubelet.go:314] "Adding apiserver pod source"
Jan 13 22:03:12.360984 kubelet[2178]: I0113 22:03:12.360829 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 22:03:12.370790 kubelet[2178]: W0113 22:03:12.369728 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-0f60d24a30.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.131:6443: connect: connection refused
Jan 13 22:03:12.370790 kubelet[2178]: E0113 22:03:12.369911 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-0f60d24a30.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError"
Jan 13 22:03:12.370790 kubelet[2178]: W0113 22:03:12.370647 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.131:6443: connect: connection refused
Jan 13 22:03:12.370790 kubelet[2178]: E0113 22:03:12.370732 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError"
Jan 13 22:03:12.371577 kubelet[2178]: I0113 22:03:12.371538 2178 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 22:03:12.376729 kubelet[2178]: I0113 22:03:12.376516 2178 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 22:03:12.376729 kubelet[2178]: W0113 22:03:12.376642 2178 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 22:03:12.378672 kubelet[2178]: I0113 22:03:12.378628 2178 server.go:1269] "Started kubelet"
Jan 13 22:03:12.382437 kubelet[2178]: I0113 22:03:12.382397 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 22:03:12.386802 kubelet[2178]: I0113 22:03:12.385616 2178 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 22:03:12.386802 kubelet[2178]: I0113 22:03:12.386560 2178 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 22:03:12.387760 kubelet[2178]: I0113 22:03:12.387589 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 22:03:12.388041 kubelet[2178]: I0113 22:03:12.388021 2178 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 22:03:12.393926 kubelet[2178]: I0113 22:03:12.393904 2178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 22:03:12.394040 kubelet[2178]: I0113 22:03:12.394004 2178 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 22:03:12.396429 kubelet[2178]: E0113 22:03:12.396378 2178 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-0f60d24a30.novalocal\" not found"
Jan 13 22:03:12.399200 kubelet[2178]: E0113 22:03:12.394441 2178 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.131:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.131:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-2-0f60d24a30.novalocal.181a5faa8a90ee12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-0f60d24a30.novalocal,UID:ci-4081-3-0-2-0f60d24a30.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-0f60d24a30.novalocal,},FirstTimestamp:2025-01-13 22:03:12.378580498 +0000 UTC m=+0.900155693,LastTimestamp:2025-01-13 22:03:12.378580498 +0000 UTC m=+0.900155693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-0f60d24a30.novalocal,}"
Jan 13 22:03:12.399200 kubelet[2178]: E0113 22:03:12.398893 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-0f60d24a30.novalocal?timeout=10s\": dial tcp 172.24.4.131:6443: connect: connection refused" interval="200ms"
Jan 13 22:03:12.399200 kubelet[2178]: I0113 22:03:12.399096 2178 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 22:03:12.399200 kubelet[2178]: I0113 22:03:12.399133 2178 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 22:03:12.399728 kubelet[2178]: W0113 22:03:12.399386 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.131:6443: connect: connection refused
Jan 13 22:03:12.399728 kubelet[2178]: E0113 22:03:12.399427 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError"
Jan 13 22:03:12.401665 kubelet[2178]: I0113 22:03:12.401606 2178 factory.go:221] Registration of the systemd container factory successfully
Jan 13 22:03:12.401861 kubelet[2178]: I0113 22:03:12.401691 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 22:03:12.404317 kubelet[2178]: E0113 22:03:12.403752 2178 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 22:03:12.404317 kubelet[2178]: I0113 22:03:12.404093 2178 factory.go:221] Registration of the containerd container factory successfully
Jan 13 22:03:12.412016 kubelet[2178]: I0113 22:03:12.411903 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 22:03:12.413173 kubelet[2178]: I0113 22:03:12.412889 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 22:03:12.413173 kubelet[2178]: I0113 22:03:12.412915 2178 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 22:03:12.413173 kubelet[2178]: I0113 22:03:12.412932 2178 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 22:03:12.413173 kubelet[2178]: E0113 22:03:12.412966 2178 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 22:03:12.420531 kubelet[2178]: W0113 22:03:12.420398 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.131:6443: connect: connection refused
Jan 13 22:03:12.420531 kubelet[2178]: E0113 22:03:12.420472 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError"
Jan 13 22:03:12.442812 kubelet[2178]: I0113 22:03:12.442440 2178 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 22:03:12.442812 kubelet[2178]: I0113 22:03:12.442458 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 22:03:12.442812 kubelet[2178]: I0113 22:03:12.442482 2178 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 22:03:12.447255 kubelet[2178]: I0113 22:03:12.447152 2178 policy_none.go:49] "None policy: Start"
Jan 13 22:03:12.448156 kubelet[2178]: I0113 22:03:12.448131 2178 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 22:03:12.448241 kubelet[2178]: I0113 22:03:12.448169 2178 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 22:03:12.456694 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 22:03:12.469332 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 22:03:12.478469 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 22:03:12.479763 kubelet[2178]: I0113 22:03:12.479741 2178 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 22:03:12.479954 kubelet[2178]: I0113 22:03:12.479932 2178 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 22:03:12.479995 kubelet[2178]: I0113 22:03:12.479949 2178 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 22:03:12.480440 kubelet[2178]: I0113 22:03:12.480420 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 22:03:12.482026 kubelet[2178]: E0113 22:03:12.482007 2178 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-2-0f60d24a30.novalocal\" not found"
Jan 13 22:03:12.534046 systemd[1]: Created slice kubepods-burstable-pod6016ed30c524a645ee998767e65a0b15.slice - libcontainer container kubepods-burstable-pod6016ed30c524a645ee998767e65a0b15.slice.
Jan 13 22:03:12.554513 systemd[1]: Created slice kubepods-burstable-pod12bb02566962c904532fce2870ae8101.slice - libcontainer container kubepods-burstable-pod12bb02566962c904532fce2870ae8101.slice.
Jan 13 22:03:12.570057 systemd[1]: Created slice kubepods-burstable-pod6e50e6b81bfc146d0cd5c5dfe6cbd02a.slice - libcontainer container kubepods-burstable-pod6e50e6b81bfc146d0cd5c5dfe6cbd02a.slice. Jan 13 22:03:12.582541 kubelet[2178]: I0113 22:03:12.582421 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.583307 kubelet[2178]: E0113 22:03:12.583275 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.131:6443/api/v1/nodes\": dial tcp 172.24.4.131:6443: connect: connection refused" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.600639 kubelet[2178]: I0113 22:03:12.599990 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.600639 kubelet[2178]: I0113 22:03:12.600067 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e50e6b81bfc146d0cd5c5dfe6cbd02a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6e50e6b81bfc146d0cd5c5dfe6cbd02a\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.600639 kubelet[2178]: E0113 22:03:12.600107 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-0f60d24a30.novalocal?timeout=10s\": dial tcp 172.24.4.131:6443: connect: connection refused" interval="400ms" Jan 13 22:03:12.600639 kubelet[2178]: I0113 22:03:12.600125 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12bb02566962c904532fce2870ae8101-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"12bb02566962c904532fce2870ae8101\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.600639 kubelet[2178]: I0113 22:03:12.600174 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12bb02566962c904532fce2870ae8101-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"12bb02566962c904532fce2870ae8101\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.601181 kubelet[2178]: I0113 22:03:12.600217 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.601181 kubelet[2178]: I0113 22:03:12.600257 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.601181 kubelet[2178]: I0113 22:03:12.600326 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.601181 kubelet[2178]: I0113 22:03:12.600374 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.601419 kubelet[2178]: I0113 22:03:12.600416 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12bb02566962c904532fce2870ae8101-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"12bb02566962c904532fce2870ae8101\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.789376 kubelet[2178]: I0113 22:03:12.788625 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.789376 kubelet[2178]: E0113 22:03:12.789216 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.131:6443/api/v1/nodes\": dial tcp 172.24.4.131:6443: connect: connection refused" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:12.852680 containerd[1458]: time="2025-01-13T22:03:12.852466601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal,Uid:6016ed30c524a645ee998767e65a0b15,Namespace:kube-system,Attempt:0,}" Jan 13 22:03:12.869192 containerd[1458]: time="2025-01-13T22:03:12.869120425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal,Uid:12bb02566962c904532fce2870ae8101,Namespace:kube-system,Attempt:0,}" Jan 13 22:03:12.873945 containerd[1458]: time="2025-01-13T22:03:12.873626346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-0f60d24a30.novalocal,Uid:6e50e6b81bfc146d0cd5c5dfe6cbd02a,Namespace:kube-system,Attempt:0,}" Jan 13 22:03:13.000929 kubelet[2178]: E0113 22:03:13.000831 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-0f60d24a30.novalocal?timeout=10s\": dial tcp 172.24.4.131:6443: connect: connection refused" interval="800ms" Jan 13 22:03:13.193023 kubelet[2178]: I0113 22:03:13.192925 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:13.194180 kubelet[2178]: E0113 22:03:13.194111 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.131:6443/api/v1/nodes\": dial tcp 172.24.4.131:6443: connect: connection refused" 
node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:13.239720 kubelet[2178]: W0113 22:03:13.239477 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.131:6443: connect: connection refused Jan 13 22:03:13.239998 kubelet[2178]: E0113 22:03:13.239835 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError" Jan 13 22:03:13.340635 kubelet[2178]: W0113 22:03:13.340440 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.131:6443: connect: connection refused Jan 13 22:03:13.340959 kubelet[2178]: E0113 22:03:13.340689 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError" Jan 13 22:03:13.444715 kubelet[2178]: W0113 22:03:13.444431 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.131:6443: connect: connection refused Jan 13 22:03:13.444715 kubelet[2178]: E0113 22:03:13.444557 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError" Jan 13 22:03:13.801993 kubelet[2178]: E0113 22:03:13.801718 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-0f60d24a30.novalocal?timeout=10s\": dial tcp 172.24.4.131:6443: connect: connection refused" interval="1.6s" Jan 13 22:03:13.926999 kubelet[2178]: W0113 22:03:13.926767 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-0f60d24a30.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.131:6443: connect: connection refused Jan 13 22:03:13.926999 kubelet[2178]: E0113 22:03:13.926945 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-0f60d24a30.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError" Jan 13 22:03:13.999276 kubelet[2178]: I0113 22:03:13.998997 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:14.000912 kubelet[2178]: E0113 22:03:14.000847 2178 kubelet_node_status.go:95] 
"Unable to register node with API server" err="Post \"https://172.24.4.131:6443/api/v1/nodes\": dial tcp 172.24.4.131:6443: connect: connection refused" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:14.176040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680057756.mount: Deactivated successfully. Jan 13 22:03:14.192921 containerd[1458]: time="2025-01-13T22:03:14.192714616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:03:14.197726 containerd[1458]: time="2025-01-13T22:03:14.197574470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 22:03:14.199032 containerd[1458]: time="2025-01-13T22:03:14.198906318Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:03:14.201116 containerd[1458]: time="2025-01-13T22:03:14.200988183Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:03:14.205979 containerd[1458]: time="2025-01-13T22:03:14.205893162Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 22:03:14.207064 containerd[1458]: time="2025-01-13T22:03:14.206882177Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:03:14.207064 containerd[1458]: time="2025-01-13T22:03:14.206992614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 22:03:14.214588 containerd[1458]: time="2025-01-13T22:03:14.214029271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:03:14.220862 containerd[1458]: time="2025-01-13T22:03:14.220736601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.368110972s" Jan 13 22:03:14.226867 containerd[1458]: time="2025-01-13T22:03:14.226689405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.357426613s" Jan 13 22:03:14.229755 containerd[1458]: time="2025-01-13T22:03:14.229339646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.355570603s" Jan 13 
22:03:14.428977 containerd[1458]: time="2025-01-13T22:03:14.428471938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:03:14.428977 containerd[1458]: time="2025-01-13T22:03:14.428528525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:03:14.428977 containerd[1458]: time="2025-01-13T22:03:14.428542631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:03:14.428977 containerd[1458]: time="2025-01-13T22:03:14.428657356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:03:14.435497 containerd[1458]: time="2025-01-13T22:03:14.433531738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:03:14.435497 containerd[1458]: time="2025-01-13T22:03:14.433605707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:03:14.435497 containerd[1458]: time="2025-01-13T22:03:14.433624232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:03:14.435497 containerd[1458]: time="2025-01-13T22:03:14.433699262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:03:14.448059 containerd[1458]: time="2025-01-13T22:03:14.447758500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:03:14.448059 containerd[1458]: time="2025-01-13T22:03:14.447842298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:03:14.448059 containerd[1458]: time="2025-01-13T22:03:14.447878395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:03:14.449219 containerd[1458]: time="2025-01-13T22:03:14.448307250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:03:14.449279 kubelet[2178]: E0113 22:03:14.449170 2178 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.131:6443: connect: connection refused" logger="UnhandledError" Jan 13 22:03:14.463156 systemd[1]: Started cri-containerd-3567b37a7cdebc6d24c1f5977695e2f9499be01520eaeb70033c8f64dd848674.scope - libcontainer container 3567b37a7cdebc6d24c1f5977695e2f9499be01520eaeb70033c8f64dd848674. Jan 13 22:03:14.479028 systemd[1]: Started cri-containerd-e0c99eceb9ee27c99f43acc837129202934bb6d3a7950ea59fe90cd2722fd2bf.scope - libcontainer container e0c99eceb9ee27c99f43acc837129202934bb6d3a7950ea59fe90cd2722fd2bf. 
Jan 13 22:03:14.487458 systemd[1]: Started cri-containerd-6ecfbed36a776b32f0db5ca1e4f75064a9e42b9ea54123303547e2f44ec82024.scope - libcontainer container 6ecfbed36a776b32f0db5ca1e4f75064a9e42b9ea54123303547e2f44ec82024. Jan 13 22:03:14.532225 containerd[1458]: time="2025-01-13T22:03:14.532140579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal,Uid:12bb02566962c904532fce2870ae8101,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0c99eceb9ee27c99f43acc837129202934bb6d3a7950ea59fe90cd2722fd2bf\"" Jan 13 22:03:14.537331 containerd[1458]: time="2025-01-13T22:03:14.536955399Z" level=info msg="CreateContainer within sandbox \"e0c99eceb9ee27c99f43acc837129202934bb6d3a7950ea59fe90cd2722fd2bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 22:03:14.552825 containerd[1458]: time="2025-01-13T22:03:14.552675883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-0f60d24a30.novalocal,Uid:6e50e6b81bfc146d0cd5c5dfe6cbd02a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3567b37a7cdebc6d24c1f5977695e2f9499be01520eaeb70033c8f64dd848674\"" Jan 13 22:03:14.558417 containerd[1458]: time="2025-01-13T22:03:14.558327733Z" level=info msg="CreateContainer within sandbox \"3567b37a7cdebc6d24c1f5977695e2f9499be01520eaeb70033c8f64dd848674\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 22:03:14.566379 containerd[1458]: time="2025-01-13T22:03:14.566242447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal,Uid:6016ed30c524a645ee998767e65a0b15,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ecfbed36a776b32f0db5ca1e4f75064a9e42b9ea54123303547e2f44ec82024\"" Jan 13 22:03:14.569012 containerd[1458]: time="2025-01-13T22:03:14.568900733Z" level=info msg="CreateContainer within sandbox \"6ecfbed36a776b32f0db5ca1e4f75064a9e42b9ea54123303547e2f44ec82024\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 22:03:14.572500 containerd[1458]: time="2025-01-13T22:03:14.572455129Z" level=info msg="CreateContainer within sandbox \"e0c99eceb9ee27c99f43acc837129202934bb6d3a7950ea59fe90cd2722fd2bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3568e544b4f744476d6868fa189b1d540369ad983edc4965b463b0dabdc8f2df\"" Jan 13 22:03:14.573398 containerd[1458]: time="2025-01-13T22:03:14.573218240Z" level=info msg="StartContainer for \"3568e544b4f744476d6868fa189b1d540369ad983edc4965b463b0dabdc8f2df\"" Jan 13 22:03:14.598480 containerd[1458]: time="2025-01-13T22:03:14.598430536Z" level=info msg="CreateContainer within sandbox \"3567b37a7cdebc6d24c1f5977695e2f9499be01520eaeb70033c8f64dd848674\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"551a97e531f5ea998f6d92333f6f24bb4253aed7d740b87aff84d11eb7de93db\"" Jan 13 22:03:14.599506 containerd[1458]: time="2025-01-13T22:03:14.599473271Z" level=info msg="StartContainer for \"551a97e531f5ea998f6d92333f6f24bb4253aed7d740b87aff84d11eb7de93db\"" Jan 13 22:03:14.602190 systemd[1]: Started cri-containerd-3568e544b4f744476d6868fa189b1d540369ad983edc4965b463b0dabdc8f2df.scope - libcontainer container 3568e544b4f744476d6868fa189b1d540369ad983edc4965b463b0dabdc8f2df. 
Jan 13 22:03:14.608960 containerd[1458]: time="2025-01-13T22:03:14.608568109Z" level=info msg="CreateContainer within sandbox \"6ecfbed36a776b32f0db5ca1e4f75064a9e42b9ea54123303547e2f44ec82024\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b395d49d97fd96fcef078b4739e097dfe12a72516938f2d84093e086a25e1b04\"" Jan 13 22:03:14.611975 containerd[1458]: time="2025-01-13T22:03:14.611809097Z" level=info msg="StartContainer for \"b395d49d97fd96fcef078b4739e097dfe12a72516938f2d84093e086a25e1b04\"" Jan 13 22:03:14.662405 systemd[1]: Started cri-containerd-551a97e531f5ea998f6d92333f6f24bb4253aed7d740b87aff84d11eb7de93db.scope - libcontainer container 551a97e531f5ea998f6d92333f6f24bb4253aed7d740b87aff84d11eb7de93db. Jan 13 22:03:14.681493 containerd[1458]: time="2025-01-13T22:03:14.681148533Z" level=info msg="StartContainer for \"3568e544b4f744476d6868fa189b1d540369ad983edc4965b463b0dabdc8f2df\" returns successfully" Jan 13 22:03:14.684212 systemd[1]: Started cri-containerd-b395d49d97fd96fcef078b4739e097dfe12a72516938f2d84093e086a25e1b04.scope - libcontainer container b395d49d97fd96fcef078b4739e097dfe12a72516938f2d84093e086a25e1b04. Jan 13 22:03:14.750199 containerd[1458]: time="2025-01-13T22:03:14.750147591Z" level=info msg="StartContainer for \"551a97e531f5ea998f6d92333f6f24bb4253aed7d740b87aff84d11eb7de93db\" returns successfully" Jan 13 22:03:14.778514 containerd[1458]: time="2025-01-13T22:03:14.778029664Z" level=info msg="StartContainer for \"b395d49d97fd96fcef078b4739e097dfe12a72516938f2d84093e086a25e1b04\" returns successfully" Jan 13 22:03:15.526592 update_engine[1441]: I20250113 22:03:15.526010 1441 update_attempter.cc:509] Updating boot flags... Jan 13 22:03:15.560318 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2449) Jan 13 22:03:15.614298 kubelet[2178]: I0113 22:03:15.611462 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:15.648834 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2451) Jan 13 22:03:17.033009 kubelet[2178]: E0113 22:03:17.032438 2178 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-2-0f60d24a30.novalocal\" not found" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:17.103516 kubelet[2178]: I0113 22:03:17.101227 2178 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:17.151202 kubelet[2178]: E0113 22:03:17.151099 2178 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-2-0f60d24a30.novalocal.181a5faa8a90ee12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-0f60d24a30.novalocal,UID:ci-4081-3-0-2-0f60d24a30.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-0f60d24a30.novalocal,},FirstTimestamp:2025-01-13 22:03:12.378580498 +0000 UTC m=+0.900155693,LastTimestamp:2025-01-13 22:03:12.378580498 +0000 UTC m=+0.900155693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-0f60d24a30.novalocal,}" Jan 13 22:03:17.214632 kubelet[2178]: E0113 22:03:17.214550 2178 event.go:359] "Server rejected 
event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-2-0f60d24a30.novalocal.181a5faa8c10a985 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-0f60d24a30.novalocal,UID:ci-4081-3-0-2-0f60d24a30.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-0f60d24a30.novalocal,},FirstTimestamp:2025-01-13 22:03:12.403728773 +0000 UTC m=+0.925303968,LastTimestamp:2025-01-13 22:03:12.403728773 +0000 UTC m=+0.925303968,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-0f60d24a30.novalocal,}" Jan 13 22:03:17.279542 kubelet[2178]: E0113 22:03:17.279304 2178 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-2-0f60d24a30.novalocal.181a5faa8e4c0aff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-0f60d24a30.novalocal,UID:ci-4081-3-0-2-0f60d24a30.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-3-0-2-0f60d24a30.novalocal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-0f60d24a30.novalocal,},FirstTimestamp:2025-01-13 22:03:12.441174783 +0000 UTC m=+0.962749928,LastTimestamp:2025-01-13 22:03:12.441174783 +0000 UTC m=+0.962749928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-0f60d24a30.novalocal,}" Jan 13 22:03:17.374478 kubelet[2178]: I0113 22:03:17.372996 2178 apiserver.go:52] "Watching apiserver" Jan 13 22:03:17.399371 kubelet[2178]: I0113 22:03:17.399228 2178 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 22:03:18.015835 kubelet[2178]: W0113 22:03:18.014365 2178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:03:19.733550 systemd[1]: Reloading requested from client PID 2458 ('systemctl') (unit session-9.scope)... Jan 13 22:03:19.733577 systemd[1]: Reloading... Jan 13 22:03:19.833942 zram_generator::config[2498]: No configuration found. Jan 13 22:03:19.974525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 22:03:20.051792 kubelet[2178]: W0113 22:03:20.051674 2178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:03:20.077058 systemd[1]: Reloading finished in 342 ms. Jan 13 22:03:20.113827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:03:20.129272 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 22:03:20.129448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:03:20.129496 systemd[1]: kubelet.service: Consumed 1.424s CPU time, 118.4M memory peak, 0B memory swap peak. 
Jan 13 22:03:20.138203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:03:20.400225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:03:20.416963 (kubelet)[2561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 22:03:20.517806 kubelet[2561]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:03:20.517806 kubelet[2561]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 22:03:20.517806 kubelet[2561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:03:20.517806 kubelet[2561]: I0113 22:03:20.517395 2561 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 22:03:20.525510 kubelet[2561]: I0113 22:03:20.525482 2561 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 22:03:20.526639 kubelet[2561]: I0113 22:03:20.525647 2561 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 22:03:20.526639 kubelet[2561]: I0113 22:03:20.525929 2561 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 22:03:20.527867 kubelet[2561]: I0113 22:03:20.527841 2561 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 22:03:20.530376 kubelet[2561]: I0113 22:03:20.530358 2561 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 22:03:20.534112 kubelet[2561]: E0113 22:03:20.534079 2561 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 22:03:20.534283 kubelet[2561]: I0113 22:03:20.534273 2561 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 22:03:20.543298 kubelet[2561]: I0113 22:03:20.543256 2561 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 22:03:20.543892 kubelet[2561]: I0113 22:03:20.543880 2561 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 22:03:20.544472 kubelet[2561]: I0113 22:03:20.544439 2561 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 22:03:20.545124 kubelet[2561]: I0113 22:03:20.544818 2561 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-0f60d24a30.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 22:03:20.545340 kubelet[2561]: I0113 22:03:20.545132 2561 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 22:03:20.545340 kubelet[2561]: I0113 22:03:20.545144 2561 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 22:03:20.545340 kubelet[2561]: I0113 22:03:20.545184 2561 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:03:20.545340 kubelet[2561]: I0113 22:03:20.545285 2561 kubelet.go:408] "Attempting to sync node with API server" Jan 13 22:03:20.545340 kubelet[2561]: I0113 22:03:20.545299 2561 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 22:03:20.545340 kubelet[2561]: I0113 22:03:20.545325 2561 kubelet.go:314] "Adding apiserver pod source" Jan 13 22:03:20.545340 kubelet[2561]: I0113 22:03:20.545343 2561 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 22:03:20.549938 kubelet[2561]: I0113 22:03:20.549464 2561 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 22:03:20.551926 kubelet[2561]: I0113 22:03:20.551891 2561 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 22:03:20.554960 kubelet[2561]: I0113 22:03:20.554922 2561 server.go:1269] "Started kubelet" Jan 13 22:03:20.560145 kubelet[2561]: I0113 22:03:20.560109 2561 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 
22:03:20.562792 kubelet[2561]: I0113 22:03:20.562688 2561 server.go:460] "Adding debug handlers to kubelet server" Jan 13 22:03:20.564704 kubelet[2561]: I0113 22:03:20.564663 2561 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 22:03:20.566944 kubelet[2561]: I0113 22:03:20.566930 2561 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 22:03:20.567701 kubelet[2561]: I0113 22:03:20.567570 2561 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 22:03:20.571429 kubelet[2561]: I0113 22:03:20.571410 2561 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 22:03:20.575041 kubelet[2561]: I0113 22:03:20.574598 2561 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 22:03:20.575041 kubelet[2561]: E0113 22:03:20.574801 2561 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-0f60d24a30.novalocal\" not found" Jan 13 22:03:20.579076 kubelet[2561]: I0113 22:03:20.579050 2561 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 22:03:20.579306 kubelet[2561]: I0113 22:03:20.579295 2561 reconciler.go:26] "Reconciler: start to sync state" Jan 13 22:03:20.582058 kubelet[2561]: I0113 22:03:20.582031 2561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 22:03:20.584485 kubelet[2561]: I0113 22:03:20.584471 2561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 22:03:20.584565 kubelet[2561]: I0113 22:03:20.584556 2561 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 22:03:20.584626 kubelet[2561]: I0113 22:03:20.584618 2561 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 22:03:20.584714 kubelet[2561]: E0113 22:03:20.584698 2561 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 22:03:20.589168 kubelet[2561]: I0113 22:03:20.589151 2561 factory.go:221] Registration of the systemd container factory successfully Jan 13 22:03:20.589465 kubelet[2561]: I0113 22:03:20.589393 2561 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 22:03:20.597816 kubelet[2561]: I0113 22:03:20.596517 2561 factory.go:221] Registration of the containerd container factory successfully Jan 13 22:03:20.646054 kubelet[2561]: I0113 22:03:20.645697 2561 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 22:03:20.646054 kubelet[2561]: I0113 22:03:20.645716 2561 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 22:03:20.646054 kubelet[2561]: I0113 22:03:20.645732 2561 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:03:20.646054 kubelet[2561]: I0113 22:03:20.645924 2561 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 22:03:20.646054 kubelet[2561]: I0113 22:03:20.645936 2561 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 22:03:20.646054 kubelet[2561]: I0113 22:03:20.645956 2561 policy_none.go:49] "None policy: Start" Jan 13 22:03:20.646899 kubelet[2561]: I0113 22:03:20.646637 2561 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 22:03:20.646899 
kubelet[2561]: I0113 22:03:20.646671 2561 state_mem.go:35] "Initializing new in-memory state store" Jan 13 22:03:20.646899 kubelet[2561]: I0113 22:03:20.646841 2561 state_mem.go:75] "Updated machine memory state" Jan 13 22:03:20.653832 kubelet[2561]: I0113 22:03:20.650901 2561 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 22:03:20.653832 kubelet[2561]: I0113 22:03:20.652429 2561 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 22:03:20.653832 kubelet[2561]: I0113 22:03:20.652442 2561 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 22:03:20.653832 kubelet[2561]: I0113 22:03:20.652608 2561 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 22:03:20.693598 kubelet[2561]: W0113 22:03:20.693492 2561 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:03:20.699301 kubelet[2561]: W0113 22:03:20.699274 2561 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:03:20.699703 kubelet[2561]: W0113 22:03:20.699462 2561 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:03:20.699703 kubelet[2561]: E0113 22:03:20.699543 2561 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.699703 kubelet[2561]: E0113 22:03:20.699691 2561 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.758179 kubelet[2561]: I0113 22:03:20.758136 2561 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.781270 kubelet[2561]: I0113 22:03:20.778295 2561 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.781270 kubelet[2561]: I0113 22:03:20.778452 2561 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.881395 kubelet[2561]: I0113 22:03:20.881284 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.881395 kubelet[2561]: I0113 22:03:20.881370 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.881698 kubelet[2561]: I0113 22:03:20.881421 2561 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12bb02566962c904532fce2870ae8101-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"12bb02566962c904532fce2870ae8101\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.881698 kubelet[2561]: I0113 22:03:20.881466 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12bb02566962c904532fce2870ae8101-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"12bb02566962c904532fce2870ae8101\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.881698 kubelet[2561]: I0113 22:03:20.881510 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.881698 kubelet[2561]: I0113 22:03:20.881549 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.882032 kubelet[2561]: I0113 22:03:20.881596 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6016ed30c524a645ee998767e65a0b15-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6016ed30c524a645ee998767e65a0b15\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.882032 kubelet[2561]: I0113 22:03:20.881636 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e50e6b81bfc146d0cd5c5dfe6cbd02a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"6e50e6b81bfc146d0cd5c5dfe6cbd02a\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:20.882032 kubelet[2561]: I0113 22:03:20.881674 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12bb02566962c904532fce2870ae8101-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal\" (UID: \"12bb02566962c904532fce2870ae8101\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:21.548858 kubelet[2561]: I0113 22:03:21.546402 2561 apiserver.go:52] "Watching apiserver" Jan 13 22:03:21.580351 kubelet[2561]: I0113 22:03:21.580258 2561 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 22:03:21.634209 kubelet[2561]: W0113 22:03:21.634170 2561 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:03:21.634323 
kubelet[2561]: E0113 22:03:21.634249 2561 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:03:21.708345 kubelet[2561]: I0113 22:03:21.708258 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-2-0f60d24a30.novalocal" podStartSLOduration=1.708241955 podStartE2EDuration="1.708241955s" podCreationTimestamp="2025-01-13 22:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:03:21.706565922 +0000 UTC m=+1.278440275" watchObservedRunningTime="2025-01-13 22:03:21.708241955 +0000 UTC m=+1.280116308" Jan 13 22:03:21.708547 kubelet[2561]: I0113 22:03:21.708409 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-2-0f60d24a30.novalocal" podStartSLOduration=1.7084016050000002 podStartE2EDuration="1.708401605s" podCreationTimestamp="2025-01-13 22:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:03:21.674273317 +0000 UTC m=+1.246147670" watchObservedRunningTime="2025-01-13 22:03:21.708401605 +0000 UTC m=+1.280275958" Jan 13 22:03:21.724142 kubelet[2561]: I0113 22:03:21.723996 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-2-0f60d24a30.novalocal" podStartSLOduration=3.723977497 podStartE2EDuration="3.723977497s" podCreationTimestamp="2025-01-13 22:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:03:21.721945045 +0000 UTC m=+1.293819398" watchObservedRunningTime="2025-01-13 22:03:21.723977497 +0000 UTC m=+1.295851850" Jan 13 22:03:24.056805 kubelet[2561]: I0113 22:03:24.056566 2561 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 22:03:24.057384 containerd[1458]: time="2025-01-13T22:03:24.057055263Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 22:03:24.059821 kubelet[2561]: I0113 22:03:24.059495 2561 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 22:03:24.606974 kubelet[2561]: I0113 22:03:24.606825 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92446572-65f4-4476-9210-1adb5d6b03d6-xtables-lock\") pod \"kube-proxy-zw8z2\" (UID: \"92446572-65f4-4476-9210-1adb5d6b03d6\") " pod="kube-system/kube-proxy-zw8z2"
Jan 13 22:03:24.606974 kubelet[2561]: I0113 22:03:24.606862 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92446572-65f4-4476-9210-1adb5d6b03d6-kube-proxy\") pod \"kube-proxy-zw8z2\" (UID: \"92446572-65f4-4476-9210-1adb5d6b03d6\") " pod="kube-system/kube-proxy-zw8z2"
Jan 13 22:03:24.606974 kubelet[2561]: I0113 22:03:24.606880 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92446572-65f4-4476-9210-1adb5d6b03d6-lib-modules\") pod \"kube-proxy-zw8z2\" (UID: \"92446572-65f4-4476-9210-1adb5d6b03d6\") " pod="kube-system/kube-proxy-zw8z2"
Jan 13 22:03:24.606974 kubelet[2561]: I0113 22:03:24.606909 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7w5\" (UniqueName: \"kubernetes.io/projected/92446572-65f4-4476-9210-1adb5d6b03d6-kube-api-access-gc7w5\") pod \"kube-proxy-zw8z2\" (UID: \"92446572-65f4-4476-9210-1adb5d6b03d6\") " pod="kube-system/kube-proxy-zw8z2"
Jan 13 22:03:24.610402 systemd[1]: Created slice kubepods-besteffort-pod92446572_65f4_4476_9210_1adb5d6b03d6.slice - libcontainer container kubepods-besteffort-pod92446572_65f4_4476_9210_1adb5d6b03d6.slice.
Jan 13 22:03:24.892184 systemd[1]: Created slice kubepods-besteffort-pod642de3c4_b142_45bd_bd2c_bf435c907c93.slice - libcontainer container kubepods-besteffort-pod642de3c4_b142_45bd_bd2c_bf435c907c93.slice.
Jan 13 22:03:24.909853 kubelet[2561]: I0113 22:03:24.909754 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/642de3c4-b142-45bd-bd2c-bf435c907c93-var-lib-calico\") pod \"tigera-operator-76c4976dd7-4fvs6\" (UID: \"642de3c4-b142-45bd-bd2c-bf435c907c93\") " pod="tigera-operator/tigera-operator-76c4976dd7-4fvs6"
Jan 13 22:03:24.909853 kubelet[2561]: I0113 22:03:24.909809 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg54r\" (UniqueName: \"kubernetes.io/projected/642de3c4-b142-45bd-bd2c-bf435c907c93-kube-api-access-dg54r\") pod \"tigera-operator-76c4976dd7-4fvs6\" (UID: \"642de3c4-b142-45bd-bd2c-bf435c907c93\") " pod="tigera-operator/tigera-operator-76c4976dd7-4fvs6"
Jan 13 22:03:24.919815 containerd[1458]: time="2025-01-13T22:03:24.919755575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zw8z2,Uid:92446572-65f4-4476-9210-1adb5d6b03d6,Namespace:kube-system,Attempt:0,}"
Jan 13 22:03:24.958451 containerd[1458]: time="2025-01-13T22:03:24.958069296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:03:24.958451 containerd[1458]: time="2025-01-13T22:03:24.958157744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:03:24.958451 containerd[1458]: time="2025-01-13T22:03:24.958173214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:03:24.958451 containerd[1458]: time="2025-01-13T22:03:24.958252545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:03:24.983954 systemd[1]: Started cri-containerd-9251f214b3e8d010dfc48a7ce55277d9fcb0e752483aae6e2ea9d18676d43acb.scope - libcontainer container 9251f214b3e8d010dfc48a7ce55277d9fcb0e752483aae6e2ea9d18676d43acb.
Jan 13 22:03:25.008065 containerd[1458]: time="2025-01-13T22:03:25.008015126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zw8z2,Uid:92446572-65f4-4476-9210-1adb5d6b03d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9251f214b3e8d010dfc48a7ce55277d9fcb0e752483aae6e2ea9d18676d43acb\""
Jan 13 22:03:25.012421 containerd[1458]: time="2025-01-13T22:03:25.012388741Z" level=info msg="CreateContainer within sandbox \"9251f214b3e8d010dfc48a7ce55277d9fcb0e752483aae6e2ea9d18676d43acb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 22:03:25.041933 containerd[1458]: time="2025-01-13T22:03:25.041832055Z" level=info msg="CreateContainer within sandbox \"9251f214b3e8d010dfc48a7ce55277d9fcb0e752483aae6e2ea9d18676d43acb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"76f11d0d0bf2e053f8992198a767e9521d5fde392880d6b6061f8a11546d817d\""
Jan 13 22:03:25.044545 containerd[1458]: time="2025-01-13T22:03:25.042928763Z" level=info msg="StartContainer for \"76f11d0d0bf2e053f8992198a767e9521d5fde392880d6b6061f8a11546d817d\""
Jan 13 22:03:25.077924 systemd[1]: Started cri-containerd-76f11d0d0bf2e053f8992198a767e9521d5fde392880d6b6061f8a11546d817d.scope - libcontainer container 76f11d0d0bf2e053f8992198a767e9521d5fde392880d6b6061f8a11546d817d.
Jan 13 22:03:25.117745 containerd[1458]: time="2025-01-13T22:03:25.117701380Z" level=info msg="StartContainer for \"76f11d0d0bf2e053f8992198a767e9521d5fde392880d6b6061f8a11546d817d\" returns successfully"
Jan 13 22:03:25.195762 containerd[1458]: time="2025-01-13T22:03:25.195519655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-4fvs6,Uid:642de3c4-b142-45bd-bd2c-bf435c907c93,Namespace:tigera-operator,Attempt:0,}"
Jan 13 22:03:25.247955 containerd[1458]: time="2025-01-13T22:03:25.247730317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:03:25.248143 containerd[1458]: time="2025-01-13T22:03:25.247944054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:03:25.248143 containerd[1458]: time="2025-01-13T22:03:25.247997665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:03:25.250254 containerd[1458]: time="2025-01-13T22:03:25.248684382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:03:25.272984 systemd[1]: Started cri-containerd-4fc2490d4af8965e6d3a5f730e313b9734c4e4d0fd1a569af1f9f4fefc83fdc2.scope - libcontainer container 4fc2490d4af8965e6d3a5f730e313b9734c4e4d0fd1a569af1f9f4fefc83fdc2.
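Two of the entries above set up pod networking: kubelet pushed the node's podCIDR (192.168.0.0/24) to the runtime over CRI, and containerd answered that it has no CNI config yet and will wait for another component to drop one. In this cluster that component is Calico, installed by the tigera-operator pod being started here; the config file itself never appears in the log. Purely as an illustration of the kind of file a network add-on eventually writes under /etc/cni/net.d/ (the file name, plugin type, and layout here are hypothetical, not taken from this boot):

    # Illustrative only: a minimal CNI conflist of the sort a network add-on
    # drops once it is running. Calico writes its own, much richer config.
    import json

    conflist = {
        "cniVersion": "0.4.0",
        "name": "example-net",
        "plugins": [{
            "type": "bridge",  # stand-in plugin; Calico's is type "calico"
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.0.0/24",  # the podCIDR kubelet reported via CRI
            },
        }],
    }

    with open("/tmp/10-example.conflist", "w") as f:  # /etc/cni/net.d/ on a real node
        json.dump(conflist, f, indent=2)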
Jan 13 22:03:25.321765 containerd[1458]: time="2025-01-13T22:03:25.321712279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-4fvs6,Uid:642de3c4-b142-45bd-bd2c-bf435c907c93,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4fc2490d4af8965e6d3a5f730e313b9734c4e4d0fd1a569af1f9f4fefc83fdc2\""
Jan 13 22:03:25.324088 containerd[1458]: time="2025-01-13T22:03:25.324062833Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 22:03:25.656923 kubelet[2561]: I0113 22:03:25.656718 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zw8z2" podStartSLOduration=1.65670098 podStartE2EDuration="1.65670098s" podCreationTimestamp="2025-01-13 22:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:03:25.655074704 +0000 UTC m=+5.226949067" watchObservedRunningTime="2025-01-13 22:03:25.65670098 +0000 UTC m=+5.228575333"
Jan 13 22:03:26.752362 sudo[1686]: pam_unix(sudo:session): session closed for user root
Jan 13 22:03:26.966197 sshd[1683]: pam_unix(sshd:session): session closed for user core
Jan 13 22:03:26.972732 systemd[1]: sshd@6-172.24.4.131:22-172.24.4.1:49976.service: Deactivated successfully.
Jan 13 22:03:26.976199 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 22:03:26.976589 systemd[1]: session-9.scope: Consumed 6.388s CPU time, 159.2M memory peak, 0B memory swap peak.
Jan 13 22:03:26.980227 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit.
Jan 13 22:03:26.983412 systemd-logind[1440]: Removed session 9.
Jan 13 22:03:31.359482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748615870.mount: Deactivated successfully.
Jan 13 22:03:31.996709 containerd[1458]: time="2025-01-13T22:03:31.996626277Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:31.997803 containerd[1458]: time="2025-01-13T22:03:31.997631070Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764297"
Jan 13 22:03:31.999038 containerd[1458]: time="2025-01-13T22:03:31.998994774Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:32.001609 containerd[1458]: time="2025-01-13T22:03:32.001567938Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:32.002834 containerd[1458]: time="2025-01-13T22:03:32.002298180Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 6.678182116s"
Jan 13 22:03:32.002834 containerd[1458]: time="2025-01-13T22:03:32.002340190Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 13 22:03:32.005132 containerd[1458]: time="2025-01-13T22:03:32.005104624Z" level=info msg="CreateContainer within sandbox \"4fc2490d4af8965e6d3a5f730e313b9734c4e4d0fd1a569af1f9f4fefc83fdc2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 22:03:32.022881 containerd[1458]: time="2025-01-13T22:03:32.022837467Z" level=info msg="CreateContainer within sandbox \"4fc2490d4af8965e6d3a5f730e313b9734c4e4d0fd1a569af1f9f4fefc83fdc2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"22fdff330a5c25d71f2423c67eeb3f477c2501e3d146c7b26f345c902d816f57\""
Jan 13 22:03:32.024386 containerd[1458]: time="2025-01-13T22:03:32.023848581Z" level=info msg="StartContainer for \"22fdff330a5c25d71f2423c67eeb3f477c2501e3d146c7b26f345c902d816f57\""
Jan 13 22:03:32.048919 systemd[1]: Started cri-containerd-22fdff330a5c25d71f2423c67eeb3f477c2501e3d146c7b26f345c902d816f57.scope - libcontainer container 22fdff330a5c25d71f2423c67eeb3f477c2501e3d146c7b26f345c902d816f57.
Jan 13 22:03:32.075876 containerd[1458]: time="2025-01-13T22:03:32.075827052Z" level=info msg="StartContainer for \"22fdff330a5c25d71f2423c67eeb3f477c2501e3d146c7b26f345c902d816f57\" returns successfully"
Jan 13 22:03:32.684826 kubelet[2561]: I0113 22:03:32.684109 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-4fvs6" podStartSLOduration=2.003839854 podStartE2EDuration="8.684072746s" podCreationTimestamp="2025-01-13 22:03:24 +0000 UTC" firstStartedPulling="2025-01-13 22:03:25.322830988 +0000 UTC m=+4.894705341" lastFinishedPulling="2025-01-13 22:03:32.00306387 +0000 UTC m=+11.574938233" observedRunningTime="2025-01-13 22:03:32.683832361 +0000 UTC m=+12.255706764" watchObservedRunningTime="2025-01-13 22:03:32.684072746 +0000 UTC m=+12.255947149"
Jan 13 22:03:35.433318 systemd[1]: Created slice kubepods-besteffort-pod35ac4db9_c7cf_4737_8772_5016dda6c0e5.slice - libcontainer container kubepods-besteffort-pod35ac4db9_c7cf_4737_8772_5016dda6c0e5.slice.
Jan 13 22:03:35.481627 kubelet[2561]: I0113 22:03:35.481485 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/35ac4db9-c7cf-4737-8772-5016dda6c0e5-typha-certs\") pod \"calico-typha-6fb48c5865-fxk52\" (UID: \"35ac4db9-c7cf-4737-8772-5016dda6c0e5\") " pod="calico-system/calico-typha-6fb48c5865-fxk52"
Jan 13 22:03:35.481627 kubelet[2561]: I0113 22:03:35.481533 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35ac4db9-c7cf-4737-8772-5016dda6c0e5-tigera-ca-bundle\") pod \"calico-typha-6fb48c5865-fxk52\" (UID: \"35ac4db9-c7cf-4737-8772-5016dda6c0e5\") " pod="calico-system/calico-typha-6fb48c5865-fxk52"
Jan 13 22:03:35.481627 kubelet[2561]: I0113 22:03:35.481559 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r95g\" (UniqueName: \"kubernetes.io/projected/35ac4db9-c7cf-4737-8772-5016dda6c0e5-kube-api-access-5r95g\") pod \"calico-typha-6fb48c5865-fxk52\" (UID: \"35ac4db9-c7cf-4737-8772-5016dda6c0e5\") " pod="calico-system/calico-typha-6fb48c5865-fxk52"
Jan 13 22:03:35.524691 systemd[1]: Created slice kubepods-besteffort-pod637a41a9_533f_4457_9846_4620cc36b5c5.slice - libcontainer container kubepods-besteffort-pod637a41a9_533f_4457_9846_4620cc36b5c5.slice.
Jan 13 22:03:35.585817 kubelet[2561]: I0113 22:03:35.582565 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/637a41a9-533f-4457-9846-4620cc36b5c5-cni-log-dir\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.585817 kubelet[2561]: I0113 22:03:35.582611 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/637a41a9-533f-4457-9846-4620cc36b5c5-flexvol-driver-host\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.585817 kubelet[2561]: I0113 22:03:35.582637 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/637a41a9-533f-4457-9846-4620cc36b5c5-var-run-calico\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.585817 kubelet[2561]: I0113 22:03:35.582657 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/637a41a9-533f-4457-9846-4620cc36b5c5-lib-modules\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.585817 kubelet[2561]: I0113 22:03:35.582678 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/637a41a9-533f-4457-9846-4620cc36b5c5-tigera-ca-bundle\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.586049 kubelet[2561]: I0113 22:03:35.582696 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/637a41a9-533f-4457-9846-4620cc36b5c5-var-lib-calico\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.586049 kubelet[2561]: I0113 22:03:35.582713 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/637a41a9-533f-4457-9846-4620cc36b5c5-cni-bin-dir\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.586049 kubelet[2561]: I0113 22:03:35.582746 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/637a41a9-533f-4457-9846-4620cc36b5c5-xtables-lock\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.586049 kubelet[2561]: I0113 22:03:35.582793 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/637a41a9-533f-4457-9846-4620cc36b5c5-policysync\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.586049 kubelet[2561]: I0113 22:03:35.582814 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/637a41a9-533f-4457-9846-4620cc36b5c5-cni-net-dir\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.586204 kubelet[2561]: I0113 22:03:35.582849 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/637a41a9-533f-4457-9846-4620cc36b5c5-node-certs\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.586204 kubelet[2561]: I0113 22:03:35.582880 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk4b6\" (UniqueName: \"kubernetes.io/projected/637a41a9-533f-4457-9846-4620cc36b5c5-kube-api-access-pk4b6\") pod \"calico-node-zr624\" (UID: \"637a41a9-533f-4457-9846-4620cc36b5c5\") " pod="calico-system/calico-node-zr624"
Jan 13 22:03:35.688501 kubelet[2561]: E0113 22:03:35.688477 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.688646 kubelet[2561]: W0113 22:03:35.688630 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.688753 kubelet[2561]: E0113 22:03:35.688738 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.690061 kubelet[2561]: E0113 22:03:35.689958 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.690114 kubelet[2561]: W0113 22:03:35.690071 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.690114 kubelet[2561]: E0113 22:03:35.690104 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.692140 kubelet[2561]: E0113 22:03:35.691845 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.692140 kubelet[2561]: W0113 22:03:35.691859 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.692140 kubelet[2561]: E0113 22:03:35.691967 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.692140 kubelet[2561]: E0113 22:03:35.692077 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.692140 kubelet[2561]: W0113 22:03:35.692085 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.692140 kubelet[2561]: E0113 22:03:35.692140 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.692589 kubelet[2561]: E0113 22:03:35.692302 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.692589 kubelet[2561]: W0113 22:03:35.692310 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.692589 kubelet[2561]: E0113 22:03:35.692358 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.692589 kubelet[2561]: E0113 22:03:35.692528 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.692589 kubelet[2561]: W0113 22:03:35.692537 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.692589 kubelet[2561]: E0113 22:03:35.692555 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.693211 kubelet[2561]: E0113 22:03:35.693025 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.693211 kubelet[2561]: W0113 22:03:35.693036 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.693856 kubelet[2561]: E0113 22:03:35.693820 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.696544 kubelet[2561]: E0113 22:03:35.696468 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572"
Jan 13 22:03:35.700757 kubelet[2561]: E0113 22:03:35.700248 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.700757 kubelet[2561]: W0113 22:03:35.700271 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.700757 kubelet[2561]: E0113 22:03:35.700521 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.705038 kubelet[2561]: E0113 22:03:35.701735 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.705038 kubelet[2561]: W0113 22:03:35.701750 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.705038 kubelet[2561]: E0113 22:03:35.701802 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.705038 kubelet[2561]: E0113 22:03:35.702020 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.705038 kubelet[2561]: W0113 22:03:35.702035 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.705038 kubelet[2561]: E0113 22:03:35.702044 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.720505 kubelet[2561]: E0113 22:03:35.720483 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.720628 kubelet[2561]: W0113 22:03:35.720613 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.720695 kubelet[2561]: E0113 22:03:35.720684 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
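The repeating driver-call.go/plugins.go triplets in this stretch are kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: the nodeagent~uds directory exists before the uds driver binary does, so each `init` call produces no output and the JSON decode fails with "unexpected end of JSON input". The noise is harmless and stops once calico-node installs the driver. A FlexVolume driver is just an executable that answers verbs such as init with a JSON status object on stdout; a minimal sketch of that handshake (the capability value is an assumption, not taken from this log):

    #!/usr/bin/env python3
    # Minimal sketch of a FlexVolume driver's `init` handshake. kubelet runs
    # `<driver> init` and parses stdout as JSON; the empty stdout of a missing
    # binary is exactly what produces the errors repeated in this log.
    import json
    import sys

    verb = sys.argv[1] if len(sys.argv) > 1 else ""
    if verb == "init":
        # attach: False advertises that no separate attach/detach step is needed
        # (assumed here; a real driver reports its actual capabilities).
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))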
Jan 13 22:03:35.738897 containerd[1458]: time="2025-01-13T22:03:35.738862659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fb48c5865-fxk52,Uid:35ac4db9-c7cf-4737-8772-5016dda6c0e5,Namespace:calico-system,Attempt:0,}"
Jan 13 22:03:35.781855 kubelet[2561]: E0113 22:03:35.781820 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.781855 kubelet[2561]: W0113 22:03:35.781845 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.782031 kubelet[2561]: E0113 22:03:35.781866 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.784213 kubelet[2561]: E0113 22:03:35.784187 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.784213 kubelet[2561]: W0113 22:03:35.784206 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.784324 kubelet[2561]: E0113 22:03:35.784222 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.784485 kubelet[2561]: E0113 22:03:35.784465 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.784485 kubelet[2561]: W0113 22:03:35.784480 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.784566 kubelet[2561]: E0113 22:03:35.784491 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.784739 kubelet[2561]: E0113 22:03:35.784679 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.784739 kubelet[2561]: W0113 22:03:35.784692 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.784739 kubelet[2561]: E0113 22:03:35.784701 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.785010 kubelet[2561]: E0113 22:03:35.784950 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.785010 kubelet[2561]: W0113 22:03:35.784959 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.785010 kubelet[2561]: E0113 22:03:35.784968 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.785480 kubelet[2561]: E0113 22:03:35.785127 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.785480 kubelet[2561]: W0113 22:03:35.785140 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.785480 kubelet[2561]: E0113 22:03:35.785148 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.785480 kubelet[2561]: E0113 22:03:35.785299 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.785480 kubelet[2561]: W0113 22:03:35.785307 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.785480 kubelet[2561]: E0113 22:03:35.785319 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.785697 kubelet[2561]: E0113 22:03:35.785513 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.785697 kubelet[2561]: W0113 22:03:35.785524 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.785697 kubelet[2561]: E0113 22:03:35.785534 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.786091 kubelet[2561]: E0113 22:03:35.785747 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.786091 kubelet[2561]: W0113 22:03:35.785758 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.786091 kubelet[2561]: E0113 22:03:35.785887 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.786371 kubelet[2561]: E0113 22:03:35.786193 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.786371 kubelet[2561]: W0113 22:03:35.786203 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.786371 kubelet[2561]: E0113 22:03:35.786213 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.786709 kubelet[2561]: E0113 22:03:35.786590 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.786709 kubelet[2561]: W0113 22:03:35.786603 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.786709 kubelet[2561]: E0113 22:03:35.786613 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.787884 kubelet[2561]: E0113 22:03:35.787857 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.787884 kubelet[2561]: W0113 22:03:35.787872 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.787884 kubelet[2561]: E0113 22:03:35.787883 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.788354 kubelet[2561]: E0113 22:03:35.788312 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.788354 kubelet[2561]: W0113 22:03:35.788327 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.788354 kubelet[2561]: E0113 22:03:35.788337 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.788645 kubelet[2561]: E0113 22:03:35.788626 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.788645 kubelet[2561]: W0113 22:03:35.788641 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.789068 kubelet[2561]: E0113 22:03:35.788652 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.789323 kubelet[2561]: E0113 22:03:35.789305 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.789323 kubelet[2561]: W0113 22:03:35.789319 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.789402 kubelet[2561]: E0113 22:03:35.789331 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.790794 kubelet[2561]: E0113 22:03:35.790432 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.790794 kubelet[2561]: W0113 22:03:35.790446 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.790794 kubelet[2561]: E0113 22:03:35.790456 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.790794 kubelet[2561]: E0113 22:03:35.790753 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.790928 kubelet[2561]: W0113 22:03:35.790764 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.790928 kubelet[2561]: E0113 22:03:35.790914 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.791083 kubelet[2561]: E0113 22:03:35.791060 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.791083 kubelet[2561]: W0113 22:03:35.791074 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.791083 kubelet[2561]: E0113 22:03:35.791084 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.791279 containerd[1458]: time="2025-01-13T22:03:35.790177232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:03:35.791684 kubelet[2561]: E0113 22:03:35.791600 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.791684 kubelet[2561]: W0113 22:03:35.791613 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.791684 kubelet[2561]: E0113 22:03:35.791623 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.791963 containerd[1458]: time="2025-01-13T22:03:35.791860584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:03:35.792820 kubelet[2561]: E0113 22:03:35.792090 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.792820 kubelet[2561]: W0113 22:03:35.792105 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.792820 kubelet[2561]: E0113 22:03:35.792117 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.793209 kubelet[2561]: E0113 22:03:35.793192 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.793209 kubelet[2561]: W0113 22:03:35.793206 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.793285 kubelet[2561]: E0113 22:03:35.793217 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.794114 kubelet[2561]: I0113 22:03:35.794091 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d18d3a49-282b-4a21-8964-114828657572-socket-dir\") pod \"csi-node-driver-mt5rr\" (UID: \"d18d3a49-282b-4a21-8964-114828657572\") " pod="calico-system/csi-node-driver-mt5rr"
Jan 13 22:03:35.794844 containerd[1458]: time="2025-01-13T22:03:35.794323048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:03:35.794844 containerd[1458]: time="2025-01-13T22:03:35.794429500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:03:35.795055 kubelet[2561]: E0113 22:03:35.795006 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.795055 kubelet[2561]: W0113 22:03:35.795021 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.795055 kubelet[2561]: E0113 22:03:35.795037 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.795225 kubelet[2561]: I0113 22:03:35.795063 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgts7\" (UniqueName: \"kubernetes.io/projected/d18d3a49-282b-4a21-8964-114828657572-kube-api-access-rgts7\") pod \"csi-node-driver-mt5rr\" (UID: \"d18d3a49-282b-4a21-8964-114828657572\") " pod="calico-system/csi-node-driver-mt5rr"
Jan 13 22:03:35.795689 kubelet[2561]: E0113 22:03:35.795666 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.795689 kubelet[2561]: W0113 22:03:35.795683 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.795689 kubelet[2561]: E0113 22:03:35.795700 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.796972 kubelet[2561]: I0113 22:03:35.795718 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d18d3a49-282b-4a21-8964-114828657572-kubelet-dir\") pod \"csi-node-driver-mt5rr\" (UID: \"d18d3a49-282b-4a21-8964-114828657572\") " pod="calico-system/csi-node-driver-mt5rr"
Jan 13 22:03:35.797892 kubelet[2561]: E0113 22:03:35.797866 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.797892 kubelet[2561]: W0113 22:03:35.797889 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.798086 kubelet[2561]: E0113 22:03:35.798001 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.798086 kubelet[2561]: I0113 22:03:35.798048 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d18d3a49-282b-4a21-8964-114828657572-registration-dir\") pod \"csi-node-driver-mt5rr\" (UID: \"d18d3a49-282b-4a21-8964-114828657572\") " pod="calico-system/csi-node-driver-mt5rr"
Jan 13 22:03:35.798284 kubelet[2561]: E0113 22:03:35.798266 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.798284 kubelet[2561]: W0113 22:03:35.798279 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.798391 kubelet[2561]: E0113 22:03:35.798366 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.798492 kubelet[2561]: E0113 22:03:35.798474 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.798492 kubelet[2561]: W0113 22:03:35.798488 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.798609 kubelet[2561]: E0113 22:03:35.798587 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.799214 kubelet[2561]: E0113 22:03:35.799194 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.799214 kubelet[2561]: W0113 22:03:35.799208 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.800078 kubelet[2561]: E0113 22:03:35.799346 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.800078 kubelet[2561]: W0113 22:03:35.799355 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.800078 kubelet[2561]: E0113 22:03:35.799488 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.800078 kubelet[2561]: W0113 22:03:35.799497 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.800078 kubelet[2561]: E0113 22:03:35.799655 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.800078 kubelet[2561]: W0113 22:03:35.799664 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.800078 kubelet[2561]: E0113 22:03:35.799828 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.800078 kubelet[2561]: E0113 22:03:35.799674 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.800078 kubelet[2561]: I0113 22:03:35.799857 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d18d3a49-282b-4a21-8964-114828657572-varrun\") pod \"csi-node-driver-mt5rr\" (UID: \"d18d3a49-282b-4a21-8964-114828657572\") " pod="calico-system/csi-node-driver-mt5rr"
Jan 13 22:03:35.800302 kubelet[2561]: E0113 22:03:35.799873 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.800302 kubelet[2561]: E0113 22:03:35.799886 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.801942 kubelet[2561]: E0113 22:03:35.800511 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.801942 kubelet[2561]: W0113 22:03:35.800527 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.801942 kubelet[2561]: E0113 22:03:35.800554 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.801942 kubelet[2561]: E0113 22:03:35.801835 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.801942 kubelet[2561]: W0113 22:03:35.801846 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.801942 kubelet[2561]: E0113 22:03:35.801863 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.802365 kubelet[2561]: E0113 22:03:35.802246 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.802365 kubelet[2561]: W0113 22:03:35.802269 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.802365 kubelet[2561]: E0113 22:03:35.802280 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.804071 kubelet[2561]: E0113 22:03:35.803961 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.804071 kubelet[2561]: W0113 22:03:35.803975 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.804071 kubelet[2561]: E0113 22:03:35.803986 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.804326 kubelet[2561]: E0113 22:03:35.804313 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.804423 kubelet[2561]: W0113 22:03:35.804382 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.804423 kubelet[2561]: E0113 22:03:35.804404 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.828985 systemd[1]: Started cri-containerd-11786eb334025774fce43274c197b6ce80d6fea9f746d5495fd3cca3e91e9efc.scope - libcontainer container 11786eb334025774fce43274c197b6ce80d6fea9f746d5495fd3cca3e91e9efc.
Jan 13 22:03:35.831158 containerd[1458]: time="2025-01-13T22:03:35.830821076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zr624,Uid:637a41a9-533f-4457-9846-4620cc36b5c5,Namespace:calico-system,Attempt:0,}"
Jan 13 22:03:35.869337 containerd[1458]: time="2025-01-13T22:03:35.869018375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:03:35.869337 containerd[1458]: time="2025-01-13T22:03:35.869073519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:03:35.869337 containerd[1458]: time="2025-01-13T22:03:35.869087926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:03:35.869992 containerd[1458]: time="2025-01-13T22:03:35.869351505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:03:35.893616 systemd[1]: Started cri-containerd-fe5677647b09f8714cd927d46f9a11ad7e721443bbbcd21a562f3d9e6f4c321f.scope - libcontainer container fe5677647b09f8714cd927d46f9a11ad7e721443bbbcd21a562f3d9e6f4c321f.
Jan 13 22:03:35.901876 kubelet[2561]: E0113 22:03:35.900657 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.901876 kubelet[2561]: W0113 22:03:35.900675 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.901876 kubelet[2561]: E0113 22:03:35.900693 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.901876 kubelet[2561]: E0113 22:03:35.901276 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.901876 kubelet[2561]: W0113 22:03:35.901287 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.901876 kubelet[2561]: E0113 22:03:35.901297 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.904632 kubelet[2561]: E0113 22:03:35.904403 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.904632 kubelet[2561]: W0113 22:03:35.904421 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.904632 kubelet[2561]: E0113 22:03:35.904434 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.904632 kubelet[2561]: E0113 22:03:35.904699 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.904632 kubelet[2561]: W0113 22:03:35.904719 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.904632 kubelet[2561]: E0113 22:03:35.904742 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.905818 kubelet[2561]: E0113 22:03:35.905523 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.905818 kubelet[2561]: W0113 22:03:35.905533 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.905818 kubelet[2561]: E0113 22:03:35.905726 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.906383 kubelet[2561]: E0113 22:03:35.906151 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.906383 kubelet[2561]: W0113 22:03:35.906164 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.906383 kubelet[2561]: E0113 22:03:35.906183 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.907428 kubelet[2561]: E0113 22:03:35.907409 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.907428 kubelet[2561]: W0113 22:03:35.907423 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.908568 kubelet[2561]: E0113 22:03:35.908538 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.908568 kubelet[2561]: W0113 22:03:35.908552 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.909358 kubelet[2561]: E0113 22:03:35.908709 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.909358 kubelet[2561]: E0113 22:03:35.909138 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.909358 kubelet[2561]: W0113 22:03:35.909147 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.909435 kubelet[2561]: E0113 22:03:35.909424 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.909462 kubelet[2561]: E0113 22:03:35.909440 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.909812 kubelet[2561]: E0113 22:03:35.909761 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.909812 kubelet[2561]: W0113 22:03:35.909789 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.910104 kubelet[2561]: E0113 22:03:35.910084 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.911007 kubelet[2561]: E0113 22:03:35.910984 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.911007 kubelet[2561]: W0113 22:03:35.911000 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.911198 kubelet[2561]: E0113 22:03:35.911169 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.911198 kubelet[2561]: W0113 22:03:35.911179 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.911454 kubelet[2561]: E0113 22:03:35.911276 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.911454 kubelet[2561]: E0113 22:03:35.911306 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.911717 kubelet[2561]: E0113 22:03:35.911583 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.911717 kubelet[2561]: W0113 22:03:35.911593 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.911717 kubelet[2561]: E0113 22:03:35.911690 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.912191 kubelet[2561]: E0113 22:03:35.911874 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.912191 kubelet[2561]: W0113 22:03:35.911884 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.912191 kubelet[2561]: E0113 22:03:35.911942 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.912191 kubelet[2561]: E0113 22:03:35.912155 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.912191 kubelet[2561]: W0113 22:03:35.912165 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.912756 kubelet[2561]: E0113 22:03:35.912589 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.913059 kubelet[2561]: E0113 22:03:35.912965 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.913059 kubelet[2561]: W0113 22:03:35.912978 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.913059 kubelet[2561]: E0113 22:03:35.913003 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.913482 kubelet[2561]: E0113 22:03:35.913156 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.913482 kubelet[2561]: W0113 22:03:35.913167 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.913482 kubelet[2561]: E0113 22:03:35.913300 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.913959 kubelet[2561]: E0113 22:03:35.913938 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.913959 kubelet[2561]: W0113 22:03:35.913953 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.914276 kubelet[2561]: E0113 22:03:35.914043 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.914276 kubelet[2561]: E0113 22:03:35.914233 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.914276 kubelet[2561]: W0113 22:03:35.914241 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.914352 kubelet[2561]: E0113 22:03:35.914307 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.914923 kubelet[2561]: E0113 22:03:35.914865 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.914923 kubelet[2561]: W0113 22:03:35.914879 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.915320 kubelet[2561]: E0113 22:03:35.915140 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 13 22:03:35.915909 kubelet[2561]: E0113 22:03:35.915888 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:03:35.915909 kubelet[2561]: W0113 22:03:35.915902 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:03:35.915909 kubelet[2561]: E0113 22:03:35.915934 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:03:35.917294 kubelet[2561]: E0113 22:03:35.917064 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:03:35.917294 kubelet[2561]: W0113 22:03:35.917080 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:03:35.917294 kubelet[2561]: E0113 22:03:35.917124 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:03:35.917720 kubelet[2561]: E0113 22:03:35.917707 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:03:35.918753 kubelet[2561]: W0113 22:03:35.917903 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:03:35.919533 kubelet[2561]: E0113 22:03:35.919020 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:03:35.919533 kubelet[2561]: W0113 22:03:35.919032 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:03:35.919533 kubelet[2561]: E0113 22:03:35.919388 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:03:35.919533 kubelet[2561]: W0113 22:03:35.919398 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:03:35.919533 kubelet[2561]: E0113 22:03:35.919410 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:03:35.919533 kubelet[2561]: E0113 22:03:35.919436 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:03:35.919533 kubelet[2561]: E0113 22:03:35.919447 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
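These records are kubelet's dynamic FlexVolume probe at work: for every vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, kubelet executes the driver binary with the argument init and expects a JSON status object on stdout. The nodeagent~uds/uds binary is not on disk yet, so the exec fails, the captured output is empty, and decoding that empty output is what yields "unexpected end of JSON input". A minimal Go sketch of the call convention follows; it illustrates the pattern rather than reproducing kubelet's actual code, and the reduced DriverStatus shape is an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a stand-in for the JSON object a FlexVolume driver
// prints on stdout; real drivers return more fields than this.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callDriver mirrors the failing pattern in the records above: execute
// the driver with "init" and decode whatever it printed. With the
// binary missing, the output is empty, so json.Unmarshal fails with
// "unexpected end of JSON input" and the exec error is reported after it.
func callDriver(path string, args ...string) (*DriverStatus, error) {
	out, execErr := exec.Command(path, args...).CombinedOutput()

	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Printf("Failed to unmarshal output for command: %s, output: %q, error: %v\n", args[0], out, err)
	}
	if execErr != nil {
		return nil, fmt.Errorf("driver call failed: executable: %s, args: %v, error: %w, output: %q", path, args, execErr, out)
	}
	return &st, nil
}

func main() {
	// The plugin path kubelet is probing in the records above.
	const uds = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	if _, err := callDriver(uds, "init"); err != nil {
		fmt.Println(err)
	}
}

Each probe cycle emits the same three records (driver-call.go:262, driver-call.go:149, plugins.go:691), so the burst recurs on every probe until the driver binary appears on disk.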
Jan 13 22:03:35.923651 containerd[1458]: time="2025-01-13T22:03:35.923577331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fb48c5865-fxk52,Uid:35ac4db9-c7cf-4737-8772-5016dda6c0e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"11786eb334025774fce43274c197b6ce80d6fea9f746d5495fd3cca3e91e9efc\""
Jan 13 22:03:35.927163 containerd[1458]: time="2025-01-13T22:03:35.927117843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 22:03:35.936910 kubelet[2561]: E0113 22:03:35.936877 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:35.936910 kubelet[2561]: W0113 22:03:35.936897 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:35.936910 kubelet[2561]: E0113 22:03:35.936915 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:35.956831 containerd[1458]: time="2025-01-13T22:03:35.956444686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zr624,Uid:637a41a9-533f-4457-9846-4620cc36b5c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe5677647b09f8714cd927d46f9a11ad7e721443bbbcd21a562f3d9e6f4c321f\""
Jan 13 22:03:37.586125 kubelet[2561]: E0113 22:03:37.585940 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572"
Jan 13 22:03:38.304378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209797591.mount: Deactivated successfully.
Jan 13 22:03:39.586187 kubelet[2561]: E0113 22:03:39.585954 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572"
Jan 13 22:03:39.695032 containerd[1458]: time="2025-01-13T22:03:39.694974739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:39.696843 containerd[1458]: time="2025-01-13T22:03:39.696798160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 13 22:03:39.699843 containerd[1458]: time="2025-01-13T22:03:39.699009692Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:39.703927 containerd[1458]: time="2025-01-13T22:03:39.703864583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:03:39.705959 containerd[1458]: time="2025-01-13T22:03:39.705920471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.778767533s"
Jan 13 22:03:39.705959 containerd[1458]: time="2025-01-13T22:03:39.705956901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 22:03:39.709453 containerd[1458]: time="2025-01-13T22:03:39.708997247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 22:03:39.731062 containerd[1458]: time="2025-01-13T22:03:39.731026001Z" level=info msg="CreateContainer within sandbox \"11786eb334025774fce43274c197b6ce80d6fea9f746d5495fd3cca3e91e9efc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 22:03:39.777721 containerd[1458]: time="2025-01-13T22:03:39.777681703Z" level=info msg="CreateContainer within sandbox \"11786eb334025774fce43274c197b6ce80d6fea9f746d5495fd3cca3e91e9efc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9a31024edbd91360a508957bbea36fcc6226a44f2c5bd4dfbb10063de7291c73\""
Jan 13 22:03:39.778456 containerd[1458]: time="2025-01-13T22:03:39.778423854Z" level=info msg="StartContainer for \"9a31024edbd91360a508957bbea36fcc6226a44f2c5bd4dfbb10063de7291c73\""
Jan 13 22:03:39.816942 systemd[1]: Started cri-containerd-9a31024edbd91360a508957bbea36fcc6226a44f2c5bd4dfbb10063de7291c73.scope - libcontainer container 9a31024edbd91360a508957bbea36fcc6226a44f2c5bd4dfbb10063de7291c73.
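The containerd records above trace the CRI sequence for the calico-typha pod: RunPodSandbox returned the sandbox id, PullImage fetched ghcr.io/flatcar/calico/typha:v3.29.1 (the "Pulled image ... in 3.778767533s" record measures the interval since the PullImage request at 22:03:35.927), and CreateContainer/StartContainer ran the container inside that sandbox, with systemd tracking it as a transient cri-containerd-<id>.scope unit. A rough Go sketch of the same sequence against containerd's CRI socket, using the k8s.io/cri-api v1 client; the socket path, metadata, and omitted fields are assumptions, and error handling is reduced to a helper:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	v1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func must(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// containerd's CRI endpoint on a typical node (an assumption here).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	must(err)
	defer conn.Close()

	rt := v1.NewRuntimeServiceClient(conn)
	img := v1.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	sandboxCfg := &v1.PodSandboxConfig{
		Metadata: &v1.PodSandboxMetadata{Name: "demo", Uid: "demo-uid", Namespace: "default"},
	}

	// 1. RunPodSandbox allocates the pod sandbox (cf. "returns sandbox id" above).
	sb, err := rt.RunPodSandbox(ctx, &v1.RunPodSandboxRequest{Config: sandboxCfg})
	must(err)

	// 2. PullImage; the log's "Pulled image ... in N s" record covers this span.
	image := &v1.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.29.1"}
	_, err = img.PullImage(ctx, &v1.PullImageRequest{Image: image, SandboxConfig: sandboxCfg})
	must(err)

	// 3. CreateContainer within the sandbox, then 4. StartContainer.
	c, err := rt.CreateContainer(ctx, &v1.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &v1.ContainerConfig{Metadata: &v1.ContainerMetadata{Name: "calico-typha"}, Image: image},
		SandboxConfig: sandboxCfg,
	})
	must(err)
	_, err = rt.StartContainer(ctx, &v1.StartContainerRequest{ContainerId: c.ContainerId})
	must(err)
	fmt.Println("started container", c.ContainerId, "in sandbox", sb.PodSandboxId)
}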
Jan 13 22:03:39.887248 containerd[1458]: time="2025-01-13T22:03:39.886920879Z" level=info msg="StartContainer for \"9a31024edbd91360a508957bbea36fcc6226a44f2c5bd4dfbb10063de7291c73\" returns successfully"
Jan 13 22:03:40.722188 systemd[1]: run-containerd-runc-k8s.io-9a31024edbd91360a508957bbea36fcc6226a44f2c5bd4dfbb10063de7291c73-runc.TZfyPF.mount: Deactivated successfully.
Jan 13 22:03:40.741634 kubelet[2561]: E0113 22:03:40.741428 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:40.741634 kubelet[2561]: W0113 22:03:40.741622 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:40.743752 kubelet[2561]: E0113 22:03:40.741701 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 22:03:41.587327 kubelet[2561]: E0113 22:03:41.585816 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572"
Jan 13 22:03:41.710594 kubelet[2561]: I0113 22:03:41.710539 2561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 22:03:41.785207 kubelet[2561]: E0113 22:03:41.784919 2561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 22:03:41.785207 kubelet[2561]: W0113 22:03:41.784961 2561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 22:03:41.785207 kubelet[2561]: E0113 22:03:41.784996 2561 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jan 13 22:03:42.130676 containerd[1458]: time="2025-01-13T22:03:42.130460401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:03:42.166613 containerd[1458]: time="2025-01-13T22:03:42.166508826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 22:03:42.169207 containerd[1458]: time="2025-01-13T22:03:42.169098948Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:03:42.173857 containerd[1458]: time="2025-01-13T22:03:42.173675494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:03:42.177932 containerd[1458]: time="2025-01-13T22:03:42.177829815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.468131674s" Jan 13 22:03:42.177932 containerd[1458]: time="2025-01-13T22:03:42.177907641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 22:03:42.190704 containerd[1458]: time="2025-01-13T22:03:42.190398674Z" level=info msg="CreateContainer within sandbox \"fe5677647b09f8714cd927d46f9a11ad7e721443bbbcd21a562f3d9e6f4c321f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 22:03:42.463612 containerd[1458]: time="2025-01-13T22:03:42.463494796Z" level=info msg="CreateContainer within sandbox \"fe5677647b09f8714cd927d46f9a11ad7e721443bbbcd21a562f3d9e6f4c321f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9\"" Jan 13 22:03:42.465419 containerd[1458]: time="2025-01-13T22:03:42.465072110Z" level=info msg="StartContainer for \"522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9\"" Jan 13 22:03:42.552551 systemd[1]: run-containerd-runc-k8s.io-522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9-runc.4rfH3I.mount: Deactivated successfully. Jan 13 22:03:42.565907 systemd[1]: Started cri-containerd-522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9.scope - libcontainer container 522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9. Jan 13 22:03:42.609568 containerd[1458]: time="2025-01-13T22:03:42.609486297Z" level=info msg="StartContainer for \"522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9\" returns successfully" Jan 13 22:03:42.617209 systemd[1]: cri-containerd-522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9.scope: Deactivated successfully. Jan 13 22:03:42.640911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9-rootfs.mount: Deactivated successfully. 
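The burst of kubelet errors above is FlexVolume's dynamic plugin probing: driver-call.go executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and unmarshals its stdout as JSON. The binary is not installed yet, so the call produces no output and the unmarshal fails with "unexpected end of JSON input"; installing that driver is exactly the job of the flexvol-driver container (from the pod2daemon-flexvol image) that just ran. A minimal sketch of the handshake the kubelet expects from init, assuming a DriverStatus shape modeled on the documented FlexVolume protocol rather than copied from kubelet's source:

```go
// Sketch of a FlexVolume driver's "init" reply. The DriverStatus struct
// is illustrative, patterned on the documented FlexVolume protocol.
package main

import (
	"encoding/json"
	"os"
)

type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The kubelet runs "<plugin-dir>/nodeagent~uds/uds init" and parses
	// stdout as JSON; a missing binary yields empty output, which is the
	// "unexpected end of JSON input" failure logged above.
	if len(os.Args) > 1 && os.Args[1] == "init" {
		json.NewEncoder(os.Stdout).Encode(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
	}
}
```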
Jan 13 22:03:42.783446 kubelet[2561]: I0113 22:03:42.783159 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fb48c5865-fxk52" podStartSLOduration=4.001677153 podStartE2EDuration="7.783130187s" podCreationTimestamp="2025-01-13 22:03:35 +0000 UTC" firstStartedPulling="2025-01-13 22:03:35.926166905 +0000 UTC m=+15.498041258" lastFinishedPulling="2025-01-13 22:03:39.707619909 +0000 UTC m=+19.279494292" observedRunningTime="2025-01-13 22:03:40.757003443 +0000 UTC m=+20.328877846" watchObservedRunningTime="2025-01-13 22:03:42.783130187 +0000 UTC m=+22.355004590" Jan 13 22:03:43.061869 kubelet[2561]: I0113 22:03:43.060639 2561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:03:43.386590 containerd[1458]: time="2025-01-13T22:03:43.385764973Z" level=info msg="shim disconnected" id=522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9 namespace=k8s.io Jan 13 22:03:43.386590 containerd[1458]: time="2025-01-13T22:03:43.385986931Z" level=warning msg="cleaning up after shim disconnected" id=522baa1d8b2fbe741b77d1aa06d83eb57273d6c3d289cc5066baa1aa5cb5b3e9 namespace=k8s.io Jan 13 22:03:43.386590 containerd[1458]: time="2025-01-13T22:03:43.386425989Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 22:03:43.586009 kubelet[2561]: E0113 22:03:43.585326 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572" Jan 13 22:03:43.728991 containerd[1458]: time="2025-01-13T22:03:43.727364734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 22:03:45.585788 kubelet[2561]: E0113 22:03:45.585205 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572" Jan 13 22:03:47.585702 kubelet[2561]: E0113 22:03:47.585600 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572" Jan 13 22:03:49.585752 kubelet[2561]: E0113 22:03:49.585672 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572" Jan 13 22:03:49.610427 containerd[1458]: time="2025-01-13T22:03:49.610345802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:03:49.612044 containerd[1458]: time="2025-01-13T22:03:49.611811629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 22:03:49.613627 containerd[1458]: time="2025-01-13T22:03:49.613375171Z" level=info msg="ImageCreate event 
name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:03:49.616025 containerd[1458]: time="2025-01-13T22:03:49.615991573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:03:49.616971 containerd[1458]: time="2025-01-13T22:03:49.616938906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.887957125s" Jan 13 22:03:49.617069 containerd[1458]: time="2025-01-13T22:03:49.617050977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 22:03:49.621095 containerd[1458]: time="2025-01-13T22:03:49.620923263Z" level=info msg="CreateContainer within sandbox \"fe5677647b09f8714cd927d46f9a11ad7e721443bbbcd21a562f3d9e6f4c321f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 22:03:49.647272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249674127.mount: Deactivated successfully. Jan 13 22:03:49.654540 containerd[1458]: time="2025-01-13T22:03:49.654420876Z" level=info msg="CreateContainer within sandbox \"fe5677647b09f8714cd927d46f9a11ad7e721443bbbcd21a562f3d9e6f4c321f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c5c7ce434948943101f06ffbebc949703122ba8519b40509d0cefa4b77d507ea\"" Jan 13 22:03:49.655977 containerd[1458]: time="2025-01-13T22:03:49.655912843Z" level=info msg="StartContainer for \"c5c7ce434948943101f06ffbebc949703122ba8519b40509d0cefa4b77d507ea\"" Jan 13 22:03:49.709939 systemd[1]: Started cri-containerd-c5c7ce434948943101f06ffbebc949703122ba8519b40509d0cefa4b77d507ea.scope - libcontainer container c5c7ce434948943101f06ffbebc949703122ba8519b40509d0cefa4b77d507ea. Jan 13 22:03:49.746450 containerd[1458]: time="2025-01-13T22:03:49.745933793Z" level=info msg="StartContainer for \"c5c7ce434948943101f06ffbebc949703122ba8519b40509d0cefa4b77d507ea\" returns successfully" Jan 13 22:03:51.026683 containerd[1458]: time="2025-01-13T22:03:51.026567742Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 22:03:51.032992 systemd[1]: cri-containerd-c5c7ce434948943101f06ffbebc949703122ba8519b40509d0cefa4b77d507ea.scope: Deactivated successfully. Jan 13 22:03:51.054979 kubelet[2561]: I0113 22:03:51.052185 2561 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 22:03:51.089004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5c7ce434948943101f06ffbebc949703122ba8519b40509d0cefa4b77d507ea-rootfs.mount: Deactivated successfully. Jan 13 22:03:51.381941 systemd[1]: Created slice kubepods-burstable-podd9885fd8_16b7_4463_95bb_7f4600467308.slice - libcontainer container kubepods-burstable-podd9885fd8_16b7_4463_95bb_7f4600467308.slice. 
Jan 13 22:03:51.474609 kubelet[2561]: I0113 22:03:51.472486 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfsdj\" (UniqueName: \"kubernetes.io/projected/d9885fd8-16b7-4463-95bb-7f4600467308-kube-api-access-mfsdj\") pod \"coredns-6f6b679f8f-lttx5\" (UID: \"d9885fd8-16b7-4463-95bb-7f4600467308\") " pod="kube-system/coredns-6f6b679f8f-lttx5" Jan 13 22:03:51.474609 kubelet[2561]: I0113 22:03:51.472569 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9885fd8-16b7-4463-95bb-7f4600467308-config-volume\") pod \"coredns-6f6b679f8f-lttx5\" (UID: \"d9885fd8-16b7-4463-95bb-7f4600467308\") " pod="kube-system/coredns-6f6b679f8f-lttx5" Jan 13 22:03:51.486864 systemd[1]: Created slice kubepods-besteffort-podbe85207e_f371_4ee7_9430_e6fb2baafa7b.slice - libcontainer container kubepods-besteffort-podbe85207e_f371_4ee7_9430_e6fb2baafa7b.slice. Jan 13 22:03:51.491357 systemd[1]: Created slice kubepods-besteffort-pod89431cd0_5f5b_411e_adcd_b6574162f169.slice - libcontainer container kubepods-besteffort-pod89431cd0_5f5b_411e_adcd_b6574162f169.slice. Jan 13 22:03:51.497655 systemd[1]: Created slice kubepods-burstable-pod6d94271b_296f_4e1d_8a91_44ce5605a368.slice - libcontainer container kubepods-burstable-pod6d94271b_296f_4e1d_8a91_44ce5605a368.slice. Jan 13 22:03:51.502736 systemd[1]: Created slice kubepods-besteffort-pod2984ddea_13f1_4959_bb56_e4d645dd9d89.slice - libcontainer container kubepods-besteffort-pod2984ddea_13f1_4959_bb56_e4d645dd9d89.slice. Jan 13 22:03:51.573734 kubelet[2561]: I0113 22:03:51.573630 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be85207e-f371-4ee7-9430-e6fb2baafa7b-calico-apiserver-certs\") pod \"calico-apiserver-6c74fff689-gwsjw\" (UID: \"be85207e-f371-4ee7-9430-e6fb2baafa7b\") " pod="calico-apiserver/calico-apiserver-6c74fff689-gwsjw" Jan 13 22:03:51.573734 kubelet[2561]: I0113 22:03:51.573734 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d94271b-296f-4e1d-8a91-44ce5605a368-config-volume\") pod \"coredns-6f6b679f8f-fbjtp\" (UID: \"6d94271b-296f-4e1d-8a91-44ce5605a368\") " pod="kube-system/coredns-6f6b679f8f-fbjtp" Jan 13 22:03:51.574053 kubelet[2561]: I0113 22:03:51.573887 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2984ddea-13f1-4959-bb56-e4d645dd9d89-tigera-ca-bundle\") pod \"calico-kube-controllers-74d6865bb-8mm8k\" (UID: \"2984ddea-13f1-4959-bb56-e4d645dd9d89\") " pod="calico-system/calico-kube-controllers-74d6865bb-8mm8k" Jan 13 22:03:51.574053 kubelet[2561]: I0113 22:03:51.573938 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ch65\" (UniqueName: \"kubernetes.io/projected/6d94271b-296f-4e1d-8a91-44ce5605a368-kube-api-access-9ch65\") pod \"coredns-6f6b679f8f-fbjtp\" (UID: \"6d94271b-296f-4e1d-8a91-44ce5605a368\") " pod="kube-system/coredns-6f6b679f8f-fbjtp" Jan 13 22:03:51.574053 kubelet[2561]: I0113 22:03:51.573985 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hphqj\" (UniqueName: 
\"kubernetes.io/projected/be85207e-f371-4ee7-9430-e6fb2baafa7b-kube-api-access-hphqj\") pod \"calico-apiserver-6c74fff689-gwsjw\" (UID: \"be85207e-f371-4ee7-9430-e6fb2baafa7b\") " pod="calico-apiserver/calico-apiserver-6c74fff689-gwsjw" Jan 13 22:03:51.574053 kubelet[2561]: I0113 22:03:51.574036 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89431cd0-5f5b-411e-adcd-b6574162f169-calico-apiserver-certs\") pod \"calico-apiserver-6c74fff689-j6m89\" (UID: \"89431cd0-5f5b-411e-adcd-b6574162f169\") " pod="calico-apiserver/calico-apiserver-6c74fff689-j6m89" Jan 13 22:03:51.574319 kubelet[2561]: I0113 22:03:51.574082 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snt4r\" (UniqueName: \"kubernetes.io/projected/89431cd0-5f5b-411e-adcd-b6574162f169-kube-api-access-snt4r\") pod \"calico-apiserver-6c74fff689-j6m89\" (UID: \"89431cd0-5f5b-411e-adcd-b6574162f169\") " pod="calico-apiserver/calico-apiserver-6c74fff689-j6m89" Jan 13 22:03:51.574319 kubelet[2561]: I0113 22:03:51.574126 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvbr\" (UniqueName: \"kubernetes.io/projected/2984ddea-13f1-4959-bb56-e4d645dd9d89-kube-api-access-nxvbr\") pod \"calico-kube-controllers-74d6865bb-8mm8k\" (UID: \"2984ddea-13f1-4959-bb56-e4d645dd9d89\") " pod="calico-system/calico-kube-controllers-74d6865bb-8mm8k" Jan 13 22:03:51.601443 systemd[1]: Created slice kubepods-besteffort-podd18d3a49_282b_4a21_8964_114828657572.slice - libcontainer container kubepods-besteffort-podd18d3a49_282b_4a21_8964_114828657572.slice. Jan 13 22:03:51.617258 containerd[1458]: time="2025-01-13T22:03:51.617161084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mt5rr,Uid:d18d3a49-282b-4a21-8964-114828657572,Namespace:calico-system,Attempt:0,}" Jan 13 22:03:51.736007 containerd[1458]: time="2025-01-13T22:03:51.735954992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lttx5,Uid:d9885fd8-16b7-4463-95bb-7f4600467308,Namespace:kube-system,Attempt:0,}" Jan 13 22:03:51.790430 containerd[1458]: time="2025-01-13T22:03:51.790290252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c74fff689-gwsjw,Uid:be85207e-f371-4ee7-9430-e6fb2baafa7b,Namespace:calico-apiserver,Attempt:0,}" Jan 13 22:03:51.795959 containerd[1458]: time="2025-01-13T22:03:51.795744670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c74fff689-j6m89,Uid:89431cd0-5f5b-411e-adcd-b6574162f169,Namespace:calico-apiserver,Attempt:0,}" Jan 13 22:03:51.802033 containerd[1458]: time="2025-01-13T22:03:51.801874418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbjtp,Uid:6d94271b-296f-4e1d-8a91-44ce5605a368,Namespace:kube-system,Attempt:0,}" Jan 13 22:03:51.806427 containerd[1458]: time="2025-01-13T22:03:51.806301636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d6865bb-8mm8k,Uid:2984ddea-13f1-4959-bb56-e4d645dd9d89,Namespace:calico-system,Attempt:0,}" Jan 13 22:03:51.984323 containerd[1458]: time="2025-01-13T22:03:51.984138727Z" level=info msg="shim disconnected" id=c5c7ce434948943101f06ffbebc949703122ba8519b40509d0cefa4b77d507ea namespace=k8s.io Jan 13 22:03:51.984323 containerd[1458]: time="2025-01-13T22:03:51.984234478Z" level=warning msg="cleaning up after 
shim disconnected" id=c5c7ce434948943101f06ffbebc949703122ba8519b40509d0cefa4b77d507ea namespace=k8s.io Jan 13 22:03:51.984323 containerd[1458]: time="2025-01-13T22:03:51.984256479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 22:03:52.280541 containerd[1458]: time="2025-01-13T22:03:52.280410331Z" level=error msg="Failed to destroy network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.281461 containerd[1458]: time="2025-01-13T22:03:52.281115457Z" level=error msg="encountered an error cleaning up failed sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.281461 containerd[1458]: time="2025-01-13T22:03:52.281169699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbjtp,Uid:6d94271b-296f-4e1d-8a91-44ce5605a368,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.281541 kubelet[2561]: E0113 22:03:52.281415 2561 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.281541 kubelet[2561]: E0113 22:03:52.281498 2561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fbjtp" Jan 13 22:03:52.281541 kubelet[2561]: E0113 22:03:52.281519 2561 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fbjtp" Jan 13 22:03:52.281830 kubelet[2561]: E0113 22:03:52.281567 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fbjtp_kube-system(6d94271b-296f-4e1d-8a91-44ce5605a368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fbjtp_kube-system(6d94271b-296f-4e1d-8a91-44ce5605a368)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fbjtp" podUID="6d94271b-296f-4e1d-8a91-44ce5605a368" Jan 13 22:03:52.313066 containerd[1458]: time="2025-01-13T22:03:52.312498109Z" level=error msg="Failed to destroy network for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.313066 containerd[1458]: time="2025-01-13T22:03:52.312891870Z" level=error msg="encountered an error cleaning up failed sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.313066 containerd[1458]: time="2025-01-13T22:03:52.312953395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lttx5,Uid:d9885fd8-16b7-4463-95bb-7f4600467308,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.313499 kubelet[2561]: E0113 22:03:52.313440 2561 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.313562 kubelet[2561]: E0113 22:03:52.313518 2561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lttx5" Jan 13 22:03:52.313562 kubelet[2561]: E0113 22:03:52.313541 2561 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lttx5" Jan 13 22:03:52.314190 kubelet[2561]: E0113 22:03:52.313836 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lttx5_kube-system(d9885fd8-16b7-4463-95bb-7f4600467308)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lttx5_kube-system(d9885fd8-16b7-4463-95bb-7f4600467308)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lttx5" podUID="d9885fd8-16b7-4463-95bb-7f4600467308" Jan 13 22:03:52.324115 containerd[1458]: time="2025-01-13T22:03:52.324026557Z" level=error msg="Failed to destroy network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.325034 containerd[1458]: time="2025-01-13T22:03:52.324898227Z" level=error msg="encountered an error cleaning up failed sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.325146 containerd[1458]: time="2025-01-13T22:03:52.325109313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c74fff689-gwsjw,Uid:be85207e-f371-4ee7-9430-e6fb2baafa7b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.325576 kubelet[2561]: E0113 22:03:52.325531 2561 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.325644 kubelet[2561]: E0113 22:03:52.325597 2561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c74fff689-gwsjw" Jan 13 22:03:52.325644 kubelet[2561]: E0113 22:03:52.325622 2561 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c74fff689-gwsjw" Jan 13 22:03:52.325709 kubelet[2561]: E0113 22:03:52.325665 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c74fff689-gwsjw_calico-apiserver(be85207e-f371-4ee7-9430-e6fb2baafa7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6c74fff689-gwsjw_calico-apiserver(be85207e-f371-4ee7-9430-e6fb2baafa7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c74fff689-gwsjw" podUID="be85207e-f371-4ee7-9430-e6fb2baafa7b" Jan 13 22:03:52.326691 containerd[1458]: time="2025-01-13T22:03:52.326640652Z" level=error msg="Failed to destroy network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.327076 containerd[1458]: time="2025-01-13T22:03:52.327045123Z" level=error msg="encountered an error cleaning up failed sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.327136 containerd[1458]: time="2025-01-13T22:03:52.327098573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d6865bb-8mm8k,Uid:2984ddea-13f1-4959-bb56-e4d645dd9d89,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.327561 kubelet[2561]: E0113 22:03:52.327377 2561 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.327561 kubelet[2561]: E0113 22:03:52.327445 2561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74d6865bb-8mm8k" Jan 13 22:03:52.327561 kubelet[2561]: E0113 22:03:52.327468 2561 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74d6865bb-8mm8k" Jan 13 22:03:52.327692 kubelet[2561]: E0113 22:03:52.327509 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-74d6865bb-8mm8k_calico-system(2984ddea-13f1-4959-bb56-e4d645dd9d89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74d6865bb-8mm8k_calico-system(2984ddea-13f1-4959-bb56-e4d645dd9d89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74d6865bb-8mm8k" podUID="2984ddea-13f1-4959-bb56-e4d645dd9d89" Jan 13 22:03:52.335363 containerd[1458]: time="2025-01-13T22:03:52.335312830Z" level=error msg="Failed to destroy network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.336226 containerd[1458]: time="2025-01-13T22:03:52.335763337Z" level=error msg="encountered an error cleaning up failed sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.336226 containerd[1458]: time="2025-01-13T22:03:52.335841053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c74fff689-j6m89,Uid:89431cd0-5f5b-411e-adcd-b6574162f169,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.336361 kubelet[2561]: E0113 22:03:52.336059 2561 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.336361 kubelet[2561]: E0113 22:03:52.336112 2561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c74fff689-j6m89" Jan 13 22:03:52.336361 kubelet[2561]: E0113 22:03:52.336135 2561 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c74fff689-j6m89" Jan 13 
22:03:52.336956 kubelet[2561]: E0113 22:03:52.336692 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c74fff689-j6m89_calico-apiserver(89431cd0-5f5b-411e-adcd-b6574162f169)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c74fff689-j6m89_calico-apiserver(89431cd0-5f5b-411e-adcd-b6574162f169)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c74fff689-j6m89" podUID="89431cd0-5f5b-411e-adcd-b6574162f169" Jan 13 22:03:52.338956 containerd[1458]: time="2025-01-13T22:03:52.338917738Z" level=error msg="Failed to destroy network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.339278 containerd[1458]: time="2025-01-13T22:03:52.339251155Z" level=error msg="encountered an error cleaning up failed sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.339465 containerd[1458]: time="2025-01-13T22:03:52.339375209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mt5rr,Uid:d18d3a49-282b-4a21-8964-114828657572,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.339589 kubelet[2561]: E0113 22:03:52.339561 2561 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.339805 kubelet[2561]: E0113 22:03:52.339670 2561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mt5rr" Jan 13 22:03:52.339805 kubelet[2561]: E0113 22:03:52.339698 2561 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mt5rr" Jan 13 22:03:52.339805 kubelet[2561]: E0113 22:03:52.339743 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mt5rr_calico-system(d18d3a49-282b-4a21-8964-114828657572)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mt5rr_calico-system(d18d3a49-282b-4a21-8964-114828657572)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572" Jan 13 22:03:52.762325 kubelet[2561]: I0113 22:03:52.762261 2561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:03:52.765226 containerd[1458]: time="2025-01-13T22:03:52.765118384Z" level=info msg="StopPodSandbox for \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\"" Jan 13 22:03:52.765527 containerd[1458]: time="2025-01-13T22:03:52.765468834Z" level=info msg="Ensure that sandbox 3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3 in task-service has been cleanup successfully" Jan 13 22:03:52.769217 kubelet[2561]: I0113 22:03:52.769014 2561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:03:52.773211 containerd[1458]: time="2025-01-13T22:03:52.772209228Z" level=info msg="StopPodSandbox for \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\"" Jan 13 22:03:52.775616 containerd[1458]: time="2025-01-13T22:03:52.775568545Z" level=info msg="Ensure that sandbox e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56 in task-service has been cleanup successfully" Jan 13 22:03:52.779190 kubelet[2561]: I0113 22:03:52.778981 2561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:03:52.785351 containerd[1458]: time="2025-01-13T22:03:52.784750912Z" level=info msg="StopPodSandbox for \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\"" Jan 13 22:03:52.785497 containerd[1458]: time="2025-01-13T22:03:52.785427985Z" level=info msg="Ensure that sandbox 6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a in task-service has been cleanup successfully" Jan 13 22:03:52.790403 kubelet[2561]: I0113 22:03:52.790198 2561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:03:52.793878 containerd[1458]: time="2025-01-13T22:03:52.793346886Z" level=info msg="StopPodSandbox for \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\"" Jan 13 22:03:52.798470 containerd[1458]: time="2025-01-13T22:03:52.798377494Z" level=info msg="Ensure that sandbox 677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57 in task-service has been cleanup successfully" Jan 13 22:03:52.818818 containerd[1458]: time="2025-01-13T22:03:52.818672317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 22:03:52.820600 
kubelet[2561]: I0113 22:03:52.820553 2561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:03:52.822916 containerd[1458]: time="2025-01-13T22:03:52.822862677Z" level=info msg="StopPodSandbox for \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\"" Jan 13 22:03:52.823633 containerd[1458]: time="2025-01-13T22:03:52.823208196Z" level=info msg="Ensure that sandbox 6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88 in task-service has been cleanup successfully" Jan 13 22:03:52.841055 kubelet[2561]: I0113 22:03:52.840971 2561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:03:52.845187 containerd[1458]: time="2025-01-13T22:03:52.845148553Z" level=info msg="StopPodSandbox for \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\"" Jan 13 22:03:52.845456 containerd[1458]: time="2025-01-13T22:03:52.845334723Z" level=info msg="Ensure that sandbox 0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5 in task-service has been cleanup successfully" Jan 13 22:03:52.893589 containerd[1458]: time="2025-01-13T22:03:52.893544757Z" level=error msg="StopPodSandbox for \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\" failed" error="failed to destroy network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.894107 kubelet[2561]: E0113 22:03:52.893952 2561 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:03:52.894107 kubelet[2561]: E0113 22:03:52.894027 2561 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3"} Jan 13 22:03:52.894199 kubelet[2561]: E0113 22:03:52.894145 2561 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d18d3a49-282b-4a21-8964-114828657572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:03:52.894275 kubelet[2561]: E0113 22:03:52.894195 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d18d3a49-282b-4a21-8964-114828657572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mt5rr" podUID="d18d3a49-282b-4a21-8964-114828657572" Jan 13 22:03:52.913514 containerd[1458]: time="2025-01-13T22:03:52.913362803Z" level=error msg="StopPodSandbox for \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\" failed" error="failed to destroy network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.913638 kubelet[2561]: E0113 22:03:52.913597 2561 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:03:52.913683 kubelet[2561]: E0113 22:03:52.913649 2561 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57"} Jan 13 22:03:52.913722 kubelet[2561]: E0113 22:03:52.913686 2561 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89431cd0-5f5b-411e-adcd-b6574162f169\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:03:52.913722 kubelet[2561]: E0113 22:03:52.913712 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89431cd0-5f5b-411e-adcd-b6574162f169\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c74fff689-j6m89" podUID="89431cd0-5f5b-411e-adcd-b6574162f169" Jan 13 22:03:52.916796 containerd[1458]: time="2025-01-13T22:03:52.916445911Z" level=error msg="StopPodSandbox for \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\" failed" error="failed to destroy network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.916860 kubelet[2561]: E0113 22:03:52.916693 2561 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:03:52.916860 kubelet[2561]: E0113 22:03:52.916737 2561 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a"} Jan 13 22:03:52.916924 kubelet[2561]: E0113 22:03:52.916863 2561 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d94271b-296f-4e1d-8a91-44ce5605a368\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:03:52.916924 kubelet[2561]: E0113 22:03:52.916896 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d94271b-296f-4e1d-8a91-44ce5605a368\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fbjtp" podUID="6d94271b-296f-4e1d-8a91-44ce5605a368" Jan 13 22:03:52.924439 containerd[1458]: time="2025-01-13T22:03:52.924375211Z" level=error msg="StopPodSandbox for \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\" failed" error="failed to destroy network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.924610 kubelet[2561]: E0113 22:03:52.924576 2561 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:03:52.924687 kubelet[2561]: E0113 22:03:52.924627 2561 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88"} Jan 13 22:03:52.924687 kubelet[2561]: E0113 22:03:52.924660 2561 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be85207e-f371-4ee7-9430-e6fb2baafa7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:03:52.924851 kubelet[2561]: E0113 22:03:52.924690 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be85207e-f371-4ee7-9430-e6fb2baafa7b\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c74fff689-gwsjw" podUID="be85207e-f371-4ee7-9430-e6fb2baafa7b" Jan 13 22:03:52.926799 containerd[1458]: time="2025-01-13T22:03:52.926742121Z" level=error msg="StopPodSandbox for \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\" failed" error="failed to destroy network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.926956 kubelet[2561]: E0113 22:03:52.926915 2561 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:03:52.927002 kubelet[2561]: E0113 22:03:52.926966 2561 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56"} Jan 13 22:03:52.927055 kubelet[2561]: E0113 22:03:52.927023 2561 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2984ddea-13f1-4959-bb56-e4d645dd9d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:03:52.927919 kubelet[2561]: E0113 22:03:52.927048 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2984ddea-13f1-4959-bb56-e4d645dd9d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74d6865bb-8mm8k" podUID="2984ddea-13f1-4959-bb56-e4d645dd9d89" Jan 13 22:03:52.934021 containerd[1458]: time="2025-01-13T22:03:52.933950546Z" level=error msg="StopPodSandbox for \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\" failed" error="failed to destroy network for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:03:52.934165 kubelet[2561]: E0113 22:03:52.934112 2561 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:03:52.934165 kubelet[2561]: E0113 22:03:52.934155 2561 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5"} Jan 13 22:03:52.934259 kubelet[2561]: E0113 22:03:52.934200 2561 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9885fd8-16b7-4463-95bb-7f4600467308\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:03:52.934259 kubelet[2561]: E0113 22:03:52.934226 2561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9885fd8-16b7-4463-95bb-7f4600467308\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lttx5" podUID="d9885fd8-16b7-4463-95bb-7f4600467308" Jan 13 22:03:53.094040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88-shm.mount: Deactivated successfully. Jan 13 22:03:53.094281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5-shm.mount: Deactivated successfully. Jan 13 22:03:53.094445 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3-shm.mount: Deactivated successfully. Jan 13 22:03:53.094605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a-shm.mount: Deactivated successfully. Jan 13 22:04:01.857064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2585732471.mount: Deactivated successfully. 
Jan 13 22:04:01.892751 containerd[1458]: time="2025-01-13T22:04:01.892695473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:01.894013 containerd[1458]: time="2025-01-13T22:04:01.893943929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 22:04:01.895557 containerd[1458]: time="2025-01-13T22:04:01.895419620Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:01.898322 containerd[1458]: time="2025-01-13T22:04:01.898225030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:01.899563 containerd[1458]: time="2025-01-13T22:04:01.899263962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.080210408s" Jan 13 22:04:01.899563 containerd[1458]: time="2025-01-13T22:04:01.899346566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 22:04:01.928806 containerd[1458]: time="2025-01-13T22:04:01.928654253Z" level=info msg="CreateContainer within sandbox \"fe5677647b09f8714cd927d46f9a11ad7e721443bbbcd21a562f3d9e6f4c321f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 22:04:01.994376 containerd[1458]: time="2025-01-13T22:04:01.994294992Z" level=info msg="CreateContainer within sandbox \"fe5677647b09f8714cd927d46f9a11ad7e721443bbbcd21a562f3d9e6f4c321f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"62d4389cda77f7950e700f405e25ab24c1594892c9c1b19297c984470092e38e\"" Jan 13 22:04:01.995467 containerd[1458]: time="2025-01-13T22:04:01.995422459Z" level=info msg="StartContainer for \"62d4389cda77f7950e700f405e25ab24c1594892c9c1b19297c984470092e38e\"" Jan 13 22:04:02.047441 systemd[1]: Started cri-containerd-62d4389cda77f7950e700f405e25ab24c1594892c9c1b19297c984470092e38e.scope - libcontainer container 62d4389cda77f7950e700f405e25ab24c1594892c9c1b19297c984470092e38e. Jan 13 22:04:02.100696 containerd[1458]: time="2025-01-13T22:04:02.100643182Z" level=info msg="StartContainer for \"62d4389cda77f7950e700f405e25ab24c1594892c9c1b19297c984470092e38e\" returns successfully" Jan 13 22:04:02.170213 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 22:04:02.170464 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 13 22:04:02.942460 systemd[1]: run-containerd-runc-k8s.io-62d4389cda77f7950e700f405e25ab24c1594892c9c1b19297c984470092e38e-runc.Dp1BSl.mount: Deactivated successfully.
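Annotation: the pull above reports 142742010 bytes read in 9.080210408s, roughly 15.7 MB/s sustained; the unpacked image size (142741872) differs only trivially because "bytes read" counts bytes fetched, not bytes on disk. The WireGuard module load right after StartContainer is most likely calico-node probing the kernel for WireGuard support at startup, even when encryption is not enabled. The throughput arithmetic as a one-liner (illustrative only):

    package main

    import "fmt"

    func main() {
        const bytesRead = 142742010 // "bytes read" reported by containerd
        const seconds = 9.080210408 // reported pull duration
        fmt.Printf("≈ %.1f MB/s\n", bytesRead/seconds/1e6) // ≈ 15.7 MB/s
    }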
Jan 13 22:04:02.949858 kubelet[2561]: I0113 22:04:02.949378 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zr624" podStartSLOduration=2.00737302 podStartE2EDuration="27.949298395s" podCreationTimestamp="2025-01-13 22:03:35 +0000 UTC" firstStartedPulling="2025-01-13 22:03:35.959910957 +0000 UTC m=+15.531785321" lastFinishedPulling="2025-01-13 22:04:01.901836333 +0000 UTC m=+41.473710696" observedRunningTime="2025-01-13 22:04:02.939700389 +0000 UTC m=+42.511574782" watchObservedRunningTime="2025-01-13 22:04:02.949298395 +0000 UTC m=+42.521172798" Jan 13 22:04:03.919837 kernel: bpftool[3890]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 22:04:04.212435 systemd-networkd[1371]: vxlan.calico: Link UP Jan 13 22:04:04.212455 systemd-networkd[1371]: vxlan.calico: Gained carrier Jan 13 22:04:05.588980 containerd[1458]: time="2025-01-13T22:04:05.588007023Z" level=info msg="StopPodSandbox for \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\"" Jan 13 22:04:05.588980 containerd[1458]: time="2025-01-13T22:04:05.588327525Z" level=info msg="StopPodSandbox for \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\"" Jan 13 22:04:05.593851 containerd[1458]: time="2025-01-13T22:04:05.593534221Z" level=info msg="StopPodSandbox for \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\"" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.775 [INFO][3998] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.775 [INFO][3998] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" iface="eth0" netns="/var/run/netns/cni-aa51a0b1-8b57-7ce7-927e-1d0cdcb64d7c" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.776 [INFO][3998] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" iface="eth0" netns="/var/run/netns/cni-aa51a0b1-8b57-7ce7-927e-1d0cdcb64d7c" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.780 [INFO][3998] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" iface="eth0" netns="/var/run/netns/cni-aa51a0b1-8b57-7ce7-927e-1d0cdcb64d7c" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.780 [INFO][3998] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.780 [INFO][3998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.818 [INFO][4022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" HandleID="k8s-pod-network.0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.819 [INFO][4022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
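Annotation: the pod_startup_latency_tracker numbers above are internally consistent, assuming the tracker's usual definition in which image-pull time is excluded from the SLO figure: the pull window is lastFinishedPulling minus firstStartedPulling, i.e. m=+41.473710696 minus m=+15.531785321 ≈ 25.941925375s, and podStartE2EDuration (27.949298395s) minus that window gives exactly podStartSLOduration (2.00737302s). Checked in Go (illustrative only):

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (the m=+... values) from the kubelet line above.
        firstStartedPulling := 15.531785321
        lastFinishedPulling := 41.473710696
        podStartE2E := 27.949298395 // observedRunningTime - podCreationTimestamp

        pullWindow := lastFinishedPulling - firstStartedPulling
        fmt.Printf("podStartSLOduration ≈ %.9fs\n", podStartE2E-pullWindow) // ≈ 2.007373020s
    }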
Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.819 [INFO][4022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.826 [WARNING][4022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" HandleID="k8s-pod-network.0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.826 [INFO][4022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" HandleID="k8s-pod-network.0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.828 [INFO][4022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:05.831786 containerd[1458]: 2025-01-13 22:04:05.830 [INFO][3998] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:05.835119 containerd[1458]: time="2025-01-13T22:04:05.833854820Z" level=info msg="TearDown network for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\" successfully" Jan 13 22:04:05.835119 containerd[1458]: time="2025-01-13T22:04:05.833883013Z" level=info msg="StopPodSandbox for \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\" returns successfully" Jan 13 22:04:05.835887 containerd[1458]: time="2025-01-13T22:04:05.835477578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lttx5,Uid:d9885fd8-16b7-4463-95bb-7f4600467308,Namespace:kube-system,Attempt:1,}" Jan 13 22:04:05.836679 systemd[1]: run-netns-cni\x2daa51a0b1\x2d8b57\x2d7ce7\x2d927e\x2d1d0cdcb64d7c.mount: Deactivated successfully. Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.772 [INFO][4009] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.773 [INFO][4009] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" iface="eth0" netns="/var/run/netns/cni-9fd7ac31-069b-3ccd-7dfc-b59a76bb50db" Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.773 [INFO][4009] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" iface="eth0" netns="/var/run/netns/cni-9fd7ac31-069b-3ccd-7dfc-b59a76bb50db" Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.773 [INFO][4009] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" iface="eth0" netns="/var/run/netns/cni-9fd7ac31-069b-3ccd-7dfc-b59a76bb50db" Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.773 [INFO][4009] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.775 [INFO][4009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.823 [INFO][4021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" HandleID="k8s-pod-network.e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.823 [INFO][4021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.828 [INFO][4021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.838 [WARNING][4021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" HandleID="k8s-pod-network.e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.839 [INFO][4021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" HandleID="k8s-pod-network.e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.841 [INFO][4021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:05.850849 containerd[1458]: 2025-01-13 22:04:05.844 [INFO][4009] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:05.850849 containerd[1458]: time="2025-01-13T22:04:05.847432364Z" level=info msg="TearDown network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\" successfully" Jan 13 22:04:05.850849 containerd[1458]: time="2025-01-13T22:04:05.847456970Z" level=info msg="StopPodSandbox for \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\" returns successfully" Jan 13 22:04:05.854297 containerd[1458]: time="2025-01-13T22:04:05.852562376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d6865bb-8mm8k,Uid:2984ddea-13f1-4959-bb56-e4d645dd9d89,Namespace:calico-system,Attempt:1,}" Jan 13 22:04:05.853142 systemd[1]: run-netns-cni\x2d9fd7ac31\x2d069b\x2d3ccd\x2d7dfc\x2db59a76bb50db.mount: Deactivated successfully. 
Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.769 [INFO][3992] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.770 [INFO][3992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" iface="eth0" netns="/var/run/netns/cni-cd7ef35f-6243-495c-7293-252a0344f264" Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.770 [INFO][3992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" iface="eth0" netns="/var/run/netns/cni-cd7ef35f-6243-495c-7293-252a0344f264" Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.772 [INFO][3992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" iface="eth0" netns="/var/run/netns/cni-cd7ef35f-6243-495c-7293-252a0344f264" Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.772 [INFO][3992] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.772 [INFO][3992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.825 [INFO][4020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" HandleID="k8s-pod-network.3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.825 [INFO][4020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.841 [INFO][4020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.858 [WARNING][4020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" HandleID="k8s-pod-network.3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.859 [INFO][4020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" HandleID="k8s-pod-network.3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.861 [INFO][4020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:05.863949 containerd[1458]: 2025-01-13 22:04:05.862 [INFO][3992] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:05.864537 containerd[1458]: time="2025-01-13T22:04:05.864124866Z" level=info msg="TearDown network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\" successfully" Jan 13 22:04:05.864537 containerd[1458]: time="2025-01-13T22:04:05.864156085Z" level=info msg="StopPodSandbox for \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\" returns successfully" Jan 13 22:04:05.865352 containerd[1458]: time="2025-01-13T22:04:05.865061604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mt5rr,Uid:d18d3a49-282b-4a21-8964-114828657572,Namespace:calico-system,Attempt:1,}" Jan 13 22:04:05.868623 systemd[1]: run-netns-cni\x2dcd7ef35f\x2d6243\x2d495c\x2d7293\x2d252a0344f264.mount: Deactivated successfully. Jan 13 22:04:06.106084 systemd-networkd[1371]: cali60cb3444374: Link UP Jan 13 22:04:06.107939 systemd-networkd[1371]: cali60cb3444374: Gained carrier Jan 13 22:04:06.126925 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:05.945 [INFO][4040] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0 coredns-6f6b679f8f- kube-system d9885fd8-16b7-4463-95bb-7f4600467308 782 0 2025-01-13 22:03:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-2-0f60d24a30.novalocal coredns-6f6b679f8f-lttx5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali60cb3444374 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Namespace="kube-system" Pod="coredns-6f6b679f8f-lttx5" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:05.948 [INFO][4040] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Namespace="kube-system" Pod="coredns-6f6b679f8f-lttx5" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.015 [INFO][4076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" HandleID="k8s-pod-network.5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.033 [INFO][4076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" HandleID="k8s-pod-network.5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fc0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-2-0f60d24a30.novalocal", "pod":"coredns-6f6b679f8f-lttx5", "timestamp":"2025-01-13 22:04:06.015379552 +0000 UTC"}, Hostname:"ci-4081-3-0-2-0f60d24a30.novalocal", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.033 [INFO][4076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.033 [INFO][4076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.033 [INFO][4076] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-0f60d24a30.novalocal' Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.039 [INFO][4076] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.046 [INFO][4076] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.056 [INFO][4076] ipam/ipam.go 489: Trying affinity for 192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.059 [INFO][4076] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.066 [INFO][4076] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.066 [INFO][4076] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.069 [INFO][4076] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889 Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.081 [INFO][4076] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.094 [INFO][4076] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.129/26] block=192.168.1.128/26 handle="k8s-pod-network.5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.095 [INFO][4076] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.129/26] handle="k8s-pod-network.5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.095 [INFO][4076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
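Annotation: the assignment walk just logged is the inverse of the releases above. Under the same host-wide lock, IPAM looks up this host's block affinities, confirms the affined block 192.168.1.128/26, loads it, and claims the first free address in it; this pod gets .129, and the next two sandboxes below get .130 and .131 in order. A minimal sketch of in-order claiming within a /26 (illustrative; the real allocator also handles reserved addresses, handles, and block overflow):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree returns the first unallocated address in the block,
    // skipping the network address itself.
    func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.1.128/26")
        allocated := map[netip.Addr]bool{}
        for i := 0; i < 3; i++ {
            a, ok := nextFree(block, allocated)
            if !ok {
                break
            }
            allocated[a] = true
            fmt.Println(a) // 192.168.1.129, then .130, then .131
        }
    }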
Jan 13 22:04:06.131735 containerd[1458]: 2025-01-13 22:04:06.095 [INFO][4076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.129/26] IPv6=[] ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" HandleID="k8s-pod-network.5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:06.133115 containerd[1458]: 2025-01-13 22:04:06.097 [INFO][4040] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Namespace="kube-system" Pod="coredns-6f6b679f8f-lttx5" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d9885fd8-16b7-4463-95bb-7f4600467308", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-lttx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60cb3444374", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:06.133115 containerd[1458]: 2025-01-13 22:04:06.097 [INFO][4040] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.129/32] ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Namespace="kube-system" Pod="coredns-6f6b679f8f-lttx5" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:06.133115 containerd[1458]: 2025-01-13 22:04:06.097 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60cb3444374 ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Namespace="kube-system" Pod="coredns-6f6b679f8f-lttx5" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:06.133115 containerd[1458]: 2025-01-13 22:04:06.108 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889"
Namespace="kube-system" Pod="coredns-6f6b679f8f-lttx5" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:06.133115 containerd[1458]: 2025-01-13 22:04:06.110 [INFO][4040] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Namespace="kube-system" Pod="coredns-6f6b679f8f-lttx5" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d9885fd8-16b7-4463-95bb-7f4600467308", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889", Pod:"coredns-6f6b679f8f-lttx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60cb3444374", MAC:"ba:9f:af:b3:1b:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:06.133115 containerd[1458]: 2025-01-13 22:04:06.124 [INFO][4040] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889" Namespace="kube-system" Pod="coredns-6f6b679f8f-lttx5" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:06.195126 containerd[1458]: time="2025-01-13T22:04:06.194857339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:04:06.195484 containerd[1458]: time="2025-01-13T22:04:06.195154005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:04:06.195484 containerd[1458]: time="2025-01-13T22:04:06.195224078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:06.195484 containerd[1458]: time="2025-01-13T22:04:06.195393666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:06.228053 systemd-networkd[1371]: calia5619d388dc: Link UP Jan 13 22:04:06.230692 systemd-networkd[1371]: calia5619d388dc: Gained carrier Jan 13 22:04:06.231983 systemd[1]: Started cri-containerd-5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889.scope - libcontainer container 5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889. Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:05.966 [INFO][4049] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0 csi-node-driver- calico-system d18d3a49-282b-4a21-8964-114828657572 781 0 2025-01-13 22:03:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-2-0f60d24a30.novalocal csi-node-driver-mt5rr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia5619d388dc [] []}} ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Namespace="calico-system" Pod="csi-node-driver-mt5rr" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:05.966 [INFO][4049] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Namespace="calico-system" Pod="csi-node-driver-mt5rr" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.056 [INFO][4080] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" HandleID="k8s-pod-network.a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.130 [INFO][4080] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" HandleID="k8s-pod-network.a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318180), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-2-0f60d24a30.novalocal", "pod":"csi-node-driver-mt5rr", "timestamp":"2025-01-13 22:04:06.056853389 +0000 UTC"}, Hostname:"ci-4081-3-0-2-0f60d24a30.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.130 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.130 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.130 [INFO][4080] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-0f60d24a30.novalocal' Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.146 [INFO][4080] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.164 [INFO][4080] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.175 [INFO][4080] ipam/ipam.go 489: Trying affinity for 192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.178 [INFO][4080] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.182 [INFO][4080] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.182 [INFO][4080] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.188 [INFO][4080] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931 Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.199 [INFO][4080] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.214 [INFO][4080] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.130/26] block=192.168.1.128/26 handle="k8s-pod-network.a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.214 [INFO][4080] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.130/26] handle="k8s-pod-network.a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.214 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 22:04:06.265418 containerd[1458]: 2025-01-13 22:04:06.216 [INFO][4080] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.130/26] IPv6=[] ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" HandleID="k8s-pod-network.a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:06.266117 containerd[1458]: 2025-01-13 22:04:06.220 [INFO][4049] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Namespace="calico-system" Pod="csi-node-driver-mt5rr" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d18d3a49-282b-4a21-8964-114828657572", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"", Pod:"csi-node-driver-mt5rr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5619d388dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:06.266117 containerd[1458]: 2025-01-13 22:04:06.220 [INFO][4049] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.130/32] ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Namespace="calico-system" Pod="csi-node-driver-mt5rr" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:06.266117 containerd[1458]: 2025-01-13 22:04:06.221 [INFO][4049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5619d388dc ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Namespace="calico-system" Pod="csi-node-driver-mt5rr" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:06.266117 containerd[1458]: 2025-01-13 22:04:06.230 [INFO][4049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Namespace="calico-system" Pod="csi-node-driver-mt5rr" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:06.266117 containerd[1458]: 2025-01-13 22:04:06.231 [INFO][4049] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to
endpoint ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Namespace="calico-system" Pod="csi-node-driver-mt5rr" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d18d3a49-282b-4a21-8964-114828657572", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931", Pod:"csi-node-driver-mt5rr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5619d388dc", MAC:"d2:9f:1e:02:d1:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:06.266117 containerd[1458]: 2025-01-13 22:04:06.259 [INFO][4049] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931" Namespace="calico-system" Pod="csi-node-driver-mt5rr" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:06.314172 containerd[1458]: time="2025-01-13T22:04:06.313152787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:04:06.314172 containerd[1458]: time="2025-01-13T22:04:06.313594917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:04:06.314172 containerd[1458]: time="2025-01-13T22:04:06.313614283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:06.314172 containerd[1458]: time="2025-01-13T22:04:06.313717567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:06.333702 systemd-networkd[1371]: caliad805b36fc2: Link UP Jan 13 22:04:06.334129 systemd-networkd[1371]: caliad805b36fc2: Gained carrier Jan 13 22:04:06.363041 containerd[1458]: time="2025-01-13T22:04:06.361499386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lttx5,Uid:d9885fd8-16b7-4463-95bb-7f4600467308,Namespace:kube-system,Attempt:1,} returns sandbox id \"5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889\"" Jan 13 22:04:06.371134 systemd[1]: Started cri-containerd-a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931.scope - libcontainer container a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931. Jan 13 22:04:06.375497 containerd[1458]: time="2025-01-13T22:04:06.375443847Z" level=info msg="CreateContainer within sandbox \"5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:05.992 [INFO][4059] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0 calico-kube-controllers-74d6865bb- calico-system 2984ddea-13f1-4959-bb56-e4d645dd9d89 783 0 2025-01-13 22:03:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74d6865bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-2-0f60d24a30.novalocal calico-kube-controllers-74d6865bb-8mm8k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliad805b36fc2 [] []}} ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Namespace="calico-system" Pod="calico-kube-controllers-74d6865bb-8mm8k" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:05.995 [INFO][4059] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Namespace="calico-system" Pod="calico-kube-controllers-74d6865bb-8mm8k" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.087 [INFO][4086] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" HandleID="k8s-pod-network.ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.160 [INFO][4086] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" HandleID="k8s-pod-network.ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002856f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-2-0f60d24a30.novalocal", 
"pod":"calico-kube-controllers-74d6865bb-8mm8k", "timestamp":"2025-01-13 22:04:06.087542287 +0000 UTC"}, Hostname:"ci-4081-3-0-2-0f60d24a30.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.160 [INFO][4086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.215 [INFO][4086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.215 [INFO][4086] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-0f60d24a30.novalocal' Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.242 [INFO][4086] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.256 [INFO][4086] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.275 [INFO][4086] ipam/ipam.go 489: Trying affinity for 192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.281 [INFO][4086] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.284 [INFO][4086] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.284 [INFO][4086] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.287 [INFO][4086] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62 Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.296 [INFO][4086] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.311 [INFO][4086] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.131/26] block=192.168.1.128/26 handle="k8s-pod-network.ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.311 [INFO][4086] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.131/26] handle="k8s-pod-network.ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.311 [INFO][4086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 22:04:06.394119 containerd[1458]: 2025-01-13 22:04:06.312 [INFO][4086] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.131/26] IPv6=[] ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" HandleID="k8s-pod-network.ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:06.394703 containerd[1458]: 2025-01-13 22:04:06.320 [INFO][4059] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Namespace="calico-system" Pod="calico-kube-controllers-74d6865bb-8mm8k" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0", GenerateName:"calico-kube-controllers-74d6865bb-", Namespace:"calico-system", SelfLink:"", UID:"2984ddea-13f1-4959-bb56-e4d645dd9d89", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d6865bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"", Pod:"calico-kube-controllers-74d6865bb-8mm8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad805b36fc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:06.394703 containerd[1458]: 2025-01-13 22:04:06.320 [INFO][4059] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.131/32] ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Namespace="calico-system" Pod="calico-kube-controllers-74d6865bb-8mm8k" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:06.394703 containerd[1458]: 2025-01-13 22:04:06.320 [INFO][4059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad805b36fc2 ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Namespace="calico-system" Pod="calico-kube-controllers-74d6865bb-8mm8k" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:06.394703 containerd[1458]: 2025-01-13 22:04:06.335 [INFO][4059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Namespace="calico-system" Pod="calico-kube-controllers-74d6865bb-8mm8k"
WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:06.394703 containerd[1458]: 2025-01-13 22:04:06.336 [INFO][4059] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Namespace="calico-system" Pod="calico-kube-controllers-74d6865bb-8mm8k" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0", GenerateName:"calico-kube-controllers-74d6865bb-", Namespace:"calico-system", SelfLink:"", UID:"2984ddea-13f1-4959-bb56-e4d645dd9d89", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d6865bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62", Pod:"calico-kube-controllers-74d6865bb-8mm8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad805b36fc2", MAC:"a2:20:3a:44:ba:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:06.394703 containerd[1458]: 2025-01-13 22:04:06.378 [INFO][4059] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62" Namespace="calico-system" Pod="calico-kube-controllers-74d6865bb-8mm8k" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:06.419307 containerd[1458]: time="2025-01-13T22:04:06.418357046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mt5rr,Uid:d18d3a49-282b-4a21-8964-114828657572,Namespace:calico-system,Attempt:1,} returns sandbox id \"a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931\"" Jan 13 22:04:06.422885 containerd[1458]: time="2025-01-13T22:04:06.422749212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 22:04:06.436526 containerd[1458]: time="2025-01-13T22:04:06.435453414Z" level=info msg="CreateContainer within sandbox \"5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"572dbe2817f84e6f7d2a89e65d2692dd620ff65b4085c2e2d4053643f8c13b18\"" Jan 13 22:04:06.438689 containerd[1458]: time="2025-01-13T22:04:06.438611383Z" level=info msg="StartContainer for \"572dbe2817f84e6f7d2a89e65d2692dd620ff65b4085c2e2d4053643f8c13b18\"" Jan 13 
22:04:06.462648 containerd[1458]: time="2025-01-13T22:04:06.454828892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:04:06.462648 containerd[1458]: time="2025-01-13T22:04:06.454889666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:04:06.462648 containerd[1458]: time="2025-01-13T22:04:06.454908601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:06.462648 containerd[1458]: time="2025-01-13T22:04:06.454995013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:06.485013 systemd[1]: Started cri-containerd-ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62.scope - libcontainer container ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62. Jan 13 22:04:06.495152 systemd[1]: Started cri-containerd-572dbe2817f84e6f7d2a89e65d2692dd620ff65b4085c2e2d4053643f8c13b18.scope - libcontainer container 572dbe2817f84e6f7d2a89e65d2692dd620ff65b4085c2e2d4053643f8c13b18. Jan 13 22:04:06.550713 containerd[1458]: time="2025-01-13T22:04:06.550565425Z" level=info msg="StartContainer for \"572dbe2817f84e6f7d2a89e65d2692dd620ff65b4085c2e2d4053643f8c13b18\" returns successfully" Jan 13 22:04:06.581287 containerd[1458]: time="2025-01-13T22:04:06.581154596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d6865bb-8mm8k,Uid:2984ddea-13f1-4959-bb56-e4d645dd9d89,Namespace:calico-system,Attempt:1,} returns sandbox id \"ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62\"" Jan 13 22:04:06.587485 containerd[1458]: time="2025-01-13T22:04:06.586702011Z" level=info msg="StopPodSandbox for \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\"" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.661 [INFO][4304] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.661 [INFO][4304] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" iface="eth0" netns="/var/run/netns/cni-813e0546-0e99-25e7-5617-8d3d54ed508a" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.662 [INFO][4304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" iface="eth0" netns="/var/run/netns/cni-813e0546-0e99-25e7-5617-8d3d54ed508a" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.662 [INFO][4304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" iface="eth0" netns="/var/run/netns/cni-813e0546-0e99-25e7-5617-8d3d54ed508a" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.662 [INFO][4304] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.662 [INFO][4304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.685 [INFO][4312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" HandleID="k8s-pod-network.6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.685 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.685 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.694 [WARNING][4312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" HandleID="k8s-pod-network.6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.694 [INFO][4312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" HandleID="k8s-pod-network.6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.698 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:06.701394 containerd[1458]: 2025-01-13 22:04:06.699 [INFO][4304] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:06.702886 containerd[1458]: time="2025-01-13T22:04:06.702019838Z" level=info msg="TearDown network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\" successfully" Jan 13 22:04:06.702886 containerd[1458]: time="2025-01-13T22:04:06.702154922Z" level=info msg="StopPodSandbox for \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\" returns successfully" Jan 13 22:04:06.703687 containerd[1458]: time="2025-01-13T22:04:06.703565350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c74fff689-gwsjw,Uid:be85207e-f371-4ee7-9430-e6fb2baafa7b,Namespace:calico-apiserver,Attempt:1,}" Jan 13 22:04:06.856163 systemd[1]: run-netns-cni\x2d813e0546\x2d0e99\x2d25e7\x2d5617\x2d8d3d54ed508a.mount: Deactivated successfully. 
Jan 13 22:04:06.898016 systemd-networkd[1371]: cali88e2f0c4e22: Link UP Jan 13 22:04:06.898947 systemd-networkd[1371]: cali88e2f0c4e22: Gained carrier Jan 13 22:04:06.909427 kubelet[2561]: I0113 22:04:06.908385 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lttx5" podStartSLOduration=42.908366821 podStartE2EDuration="42.908366821s" podCreationTimestamp="2025-01-13 22:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:04:06.9072313 +0000 UTC m=+46.479105663" watchObservedRunningTime="2025-01-13 22:04:06.908366821 +0000 UTC m=+46.480241174" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.788 [INFO][4319] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0 calico-apiserver-6c74fff689- calico-apiserver be85207e-f371-4ee7-9430-e6fb2baafa7b 799 0 2025-01-13 22:03:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c74fff689 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-2-0f60d24a30.novalocal calico-apiserver-6c74fff689-gwsjw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali88e2f0c4e22 [] []}} ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-gwsjw" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.788 [INFO][4319] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-gwsjw" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.825 [INFO][4330] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" HandleID="k8s-pod-network.18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.842 [INFO][4330] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" HandleID="k8s-pod-network.18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291520), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-2-0f60d24a30.novalocal", "pod":"calico-apiserver-6c74fff689-gwsjw", "timestamp":"2025-01-13 22:04:06.825276701 +0000 UTC"}, Hostname:"ci-4081-3-0-2-0f60d24a30.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:04:06.921979 
containerd[1458]: 2025-01-13 22:04:06.842 [INFO][4330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.842 [INFO][4330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.842 [INFO][4330] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-0f60d24a30.novalocal' Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.845 [INFO][4330] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.854 [INFO][4330] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.865 [INFO][4330] ipam/ipam.go 489: Trying affinity for 192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.868 [INFO][4330] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.871 [INFO][4330] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.871 [INFO][4330] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.873 [INFO][4330] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.879 [INFO][4330] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.891 [INFO][4330] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.132/26] block=192.168.1.128/26 handle="k8s-pod-network.18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.891 [INFO][4330] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.132/26] handle="k8s-pod-network.18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.891 [INFO][4330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 22:04:06.921979 containerd[1458]: 2025-01-13 22:04:06.891 [INFO][4330] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.132/26] IPv6=[] ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" HandleID="k8s-pod-network.18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.923247 containerd[1458]: 2025-01-13 22:04:06.894 [INFO][4319] cni-plugin/k8s.go 386: Populated endpoint ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-gwsjw" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0", GenerateName:"calico-apiserver-6c74fff689-", Namespace:"calico-apiserver", SelfLink:"", UID:"be85207e-f371-4ee7-9430-e6fb2baafa7b", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c74fff689", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"", Pod:"calico-apiserver-6c74fff689-gwsjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88e2f0c4e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:06.923247 containerd[1458]: 2025-01-13 22:04:06.895 [INFO][4319] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.132/32] ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-gwsjw" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.923247 containerd[1458]: 2025-01-13 22:04:06.895 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88e2f0c4e22 ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-gwsjw" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.923247 containerd[1458]: 2025-01-13 22:04:06.898 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-gwsjw" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.923247 
containerd[1458]: 2025-01-13 22:04:06.900 [INFO][4319] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-gwsjw" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0", GenerateName:"calico-apiserver-6c74fff689-", Namespace:"calico-apiserver", SelfLink:"", UID:"be85207e-f371-4ee7-9430-e6fb2baafa7b", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c74fff689", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b", Pod:"calico-apiserver-6c74fff689-gwsjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88e2f0c4e22", MAC:"6e:d0:a0:35:cf:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:06.923247 containerd[1458]: 2025-01-13 22:04:06.919 [INFO][4319] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-gwsjw" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:06.977765 containerd[1458]: time="2025-01-13T22:04:06.968428345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:04:06.977765 containerd[1458]: time="2025-01-13T22:04:06.969498185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:04:06.977765 containerd[1458]: time="2025-01-13T22:04:06.969529694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:06.977765 containerd[1458]: time="2025-01-13T22:04:06.971424811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:07.006966 systemd[1]: Started cri-containerd-18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b.scope - libcontainer container 18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b. 
Jan 13 22:04:07.046907 containerd[1458]: time="2025-01-13T22:04:07.046871947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c74fff689-gwsjw,Uid:be85207e-f371-4ee7-9430-e6fb2baafa7b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b\"" Jan 13 22:04:07.149941 systemd-networkd[1371]: cali60cb3444374: Gained IPv6LL Jan 13 22:04:07.534126 systemd-networkd[1371]: calia5619d388dc: Gained IPv6LL Jan 13 22:04:07.587366 containerd[1458]: time="2025-01-13T22:04:07.586980775Z" level=info msg="StopPodSandbox for \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\"" Jan 13 22:04:07.726969 systemd-networkd[1371]: caliad805b36fc2: Gained IPv6LL Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.694 [INFO][4406] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.696 [INFO][4406] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" iface="eth0" netns="/var/run/netns/cni-f683467e-748c-0c07-6770-70c2ee727b62" Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.696 [INFO][4406] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" iface="eth0" netns="/var/run/netns/cni-f683467e-748c-0c07-6770-70c2ee727b62" Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.696 [INFO][4406] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" iface="eth0" netns="/var/run/netns/cni-f683467e-748c-0c07-6770-70c2ee727b62" Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.697 [INFO][4406] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.697 [INFO][4406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.729 [INFO][4412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" HandleID="k8s-pod-network.677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.729 [INFO][4412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.729 [INFO][4412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.737 [WARNING][4412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" HandleID="k8s-pod-network.677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.737 [INFO][4412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" HandleID="k8s-pod-network.677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.741 [INFO][4412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:07.744976 containerd[1458]: 2025-01-13 22:04:07.743 [INFO][4406] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:07.747335 containerd[1458]: time="2025-01-13T22:04:07.746945055Z" level=info msg="TearDown network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\" successfully" Jan 13 22:04:07.747335 containerd[1458]: time="2025-01-13T22:04:07.746990031Z" level=info msg="StopPodSandbox for \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\" returns successfully" Jan 13 22:04:07.748804 containerd[1458]: time="2025-01-13T22:04:07.747567715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c74fff689-j6m89,Uid:89431cd0-5f5b-411e-adcd-b6574162f169,Namespace:calico-apiserver,Attempt:1,}" Jan 13 22:04:07.749266 systemd[1]: run-netns-cni\x2df683467e\x2d748c\x2d0c07\x2d6770\x2d70c2ee727b62.mount: Deactivated successfully. 
Jan 13 22:04:07.925882 systemd-networkd[1371]: cali6b839bb48dc: Link UP Jan 13 22:04:07.926264 systemd-networkd[1371]: cali6b839bb48dc: Gained carrier Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.813 [INFO][4422] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0 calico-apiserver-6c74fff689- calico-apiserver 89431cd0-5f5b-411e-adcd-b6574162f169 815 0 2025-01-13 22:03:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c74fff689 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-2-0f60d24a30.novalocal calico-apiserver-6c74fff689-j6m89 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6b839bb48dc [] []}} ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-j6m89" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.814 [INFO][4422] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-j6m89" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.863 [INFO][4429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" HandleID="k8s-pod-network.da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.878 [INFO][4429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" HandleID="k8s-pod-network.da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318a90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-2-0f60d24a30.novalocal", "pod":"calico-apiserver-6c74fff689-j6m89", "timestamp":"2025-01-13 22:04:07.863015763 +0000 UTC"}, Hostname:"ci-4081-3-0-2-0f60d24a30.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.878 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.878 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.878 [INFO][4429] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-0f60d24a30.novalocal' Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.881 [INFO][4429] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.886 [INFO][4429] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.892 [INFO][4429] ipam/ipam.go 489: Trying affinity for 192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.896 [INFO][4429] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.901 [INFO][4429] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.902 [INFO][4429] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.904 [INFO][4429] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.910 [INFO][4429] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.918 [INFO][4429] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.133/26] block=192.168.1.128/26 handle="k8s-pod-network.da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.919 [INFO][4429] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.133/26] handle="k8s-pod-network.da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.919 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 22:04:07.942818 containerd[1458]: 2025-01-13 22:04:07.919 [INFO][4429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.133/26] IPv6=[] ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" HandleID="k8s-pod-network.da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.943567 containerd[1458]: 2025-01-13 22:04:07.920 [INFO][4422] cni-plugin/k8s.go 386: Populated endpoint ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-j6m89" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0", GenerateName:"calico-apiserver-6c74fff689-", Namespace:"calico-apiserver", SelfLink:"", UID:"89431cd0-5f5b-411e-adcd-b6574162f169", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c74fff689", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"", Pod:"calico-apiserver-6c74fff689-j6m89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b839bb48dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:07.943567 containerd[1458]: 2025-01-13 22:04:07.920 [INFO][4422] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.133/32] ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-j6m89" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.943567 containerd[1458]: 2025-01-13 22:04:07.921 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b839bb48dc ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-j6m89" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.943567 containerd[1458]: 2025-01-13 22:04:07.924 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-j6m89" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.943567 
containerd[1458]: 2025-01-13 22:04:07.924 [INFO][4422] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-j6m89" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0", GenerateName:"calico-apiserver-6c74fff689-", Namespace:"calico-apiserver", SelfLink:"", UID:"89431cd0-5f5b-411e-adcd-b6574162f169", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c74fff689", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f", Pod:"calico-apiserver-6c74fff689-j6m89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b839bb48dc", MAC:"ea:3b:cf:a8:68:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:07.943567 containerd[1458]: 2025-01-13 22:04:07.938 [INFO][4422] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f" Namespace="calico-apiserver" Pod="calico-apiserver-6c74fff689-j6m89" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:07.976518 containerd[1458]: time="2025-01-13T22:04:07.969851772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:04:07.976518 containerd[1458]: time="2025-01-13T22:04:07.969902848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:04:07.976518 containerd[1458]: time="2025-01-13T22:04:07.969931462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:07.976518 containerd[1458]: time="2025-01-13T22:04:07.970018675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:08.017160 systemd[1]: Started cri-containerd-da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f.scope - libcontainer container da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f. 
Jan 13 22:04:08.104733 containerd[1458]: time="2025-01-13T22:04:08.104446744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c74fff689-j6m89,Uid:89431cd0-5f5b-411e-adcd-b6574162f169,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f\"" Jan 13 22:04:08.586726 containerd[1458]: time="2025-01-13T22:04:08.586639617Z" level=info msg="StopPodSandbox for \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\"" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.653 [INFO][4525] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.654 [INFO][4525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" iface="eth0" netns="/var/run/netns/cni-47311937-8042-804d-bc53-81f0589b38f7" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.654 [INFO][4525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" iface="eth0" netns="/var/run/netns/cni-47311937-8042-804d-bc53-81f0589b38f7" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.655 [INFO][4525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" iface="eth0" netns="/var/run/netns/cni-47311937-8042-804d-bc53-81f0589b38f7" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.655 [INFO][4525] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.655 [INFO][4525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.679 [INFO][4531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" HandleID="k8s-pod-network.6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.684 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.684 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.698 [WARNING][4531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" HandleID="k8s-pod-network.6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.698 [INFO][4531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" HandleID="k8s-pod-network.6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.700 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:08.705933 containerd[1458]: 2025-01-13 22:04:08.703 [INFO][4525] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:08.713576 containerd[1458]: time="2025-01-13T22:04:08.706320877Z" level=info msg="TearDown network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\" successfully" Jan 13 22:04:08.713576 containerd[1458]: time="2025-01-13T22:04:08.706357135Z" level=info msg="StopPodSandbox for \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\" returns successfully" Jan 13 22:04:08.713576 containerd[1458]: time="2025-01-13T22:04:08.710509782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbjtp,Uid:6d94271b-296f-4e1d-8a91-44ce5605a368,Namespace:kube-system,Attempt:1,}" Jan 13 22:04:08.709562 systemd[1]: run-netns-cni\x2d47311937\x2d8042\x2d804d\x2dbc53\x2d81f0589b38f7.mount: Deactivated successfully. 
Jan 13 22:04:08.750140 systemd-networkd[1371]: cali88e2f0c4e22: Gained IPv6LL Jan 13 22:04:08.891612 systemd-networkd[1371]: cali8f6eccf0080: Link UP Jan 13 22:04:08.892122 systemd-networkd[1371]: cali8f6eccf0080: Gained carrier Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.789 [INFO][4537] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0 coredns-6f6b679f8f- kube-system 6d94271b-296f-4e1d-8a91-44ce5605a368 825 0 2025-01-13 22:03:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-2-0f60d24a30.novalocal coredns-6f6b679f8f-fbjtp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8f6eccf0080 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbjtp" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.789 [INFO][4537] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbjtp" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.820 [INFO][4547] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" HandleID="k8s-pod-network.2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.832 [INFO][4547] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" HandleID="k8s-pod-network.2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291a50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-2-0f60d24a30.novalocal", "pod":"coredns-6f6b679f8f-fbjtp", "timestamp":"2025-01-13 22:04:08.8203935 +0000 UTC"}, Hostname:"ci-4081-3-0-2-0f60d24a30.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.832 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.832 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.832 [INFO][4547] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-2-0f60d24a30.novalocal' Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.835 [INFO][4547] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.849 [INFO][4547] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.856 [INFO][4547] ipam/ipam.go 489: Trying affinity for 192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.859 [INFO][4547] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.862 [INFO][4547] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.862 [INFO][4547] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.864 [INFO][4547] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.872 [INFO][4547] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.882 [INFO][4547] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.134/26] block=192.168.1.128/26 handle="k8s-pod-network.2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.882 [INFO][4547] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.134/26] handle="k8s-pod-network.2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" host="ci-4081-3-0-2-0f60d24a30.novalocal" Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.882 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 22:04:08.909174 containerd[1458]: 2025-01-13 22:04:08.882 [INFO][4547] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.134/26] IPv6=[] ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" HandleID="k8s-pod-network.2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.918573 containerd[1458]: 2025-01-13 22:04:08.884 [INFO][4537] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbjtp" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6d94271b-296f-4e1d-8a91-44ce5605a368", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-fbjtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f6eccf0080", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:08.918573 containerd[1458]: 2025-01-13 22:04:08.884 [INFO][4537] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.134/32] ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbjtp" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.918573 containerd[1458]: 2025-01-13 22:04:08.884 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f6eccf0080 ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbjtp" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.918573 containerd[1458]: 2025-01-13 22:04:08.886 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-fbjtp" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.918573 containerd[1458]: 2025-01-13 22:04:08.886 [INFO][4537] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbjtp" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6d94271b-296f-4e1d-8a91-44ce5605a368", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef", Pod:"coredns-6f6b679f8f-fbjtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f6eccf0080", MAC:"8a:64:37:36:28:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:08.918573 containerd[1458]: 2025-01-13 22:04:08.900 [INFO][4537] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbjtp" WorkloadEndpoint="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:08.950245 containerd[1458]: time="2025-01-13T22:04:08.950001215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:04:08.950245 containerd[1458]: time="2025-01-13T22:04:08.950076156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:04:08.950245 containerd[1458]: time="2025-01-13T22:04:08.950095291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:08.950245 containerd[1458]: time="2025-01-13T22:04:08.950179910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:04:08.972916 systemd[1]: Started cri-containerd-2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef.scope - libcontainer container 2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef. Jan 13 22:04:09.030863 containerd[1458]: time="2025-01-13T22:04:09.030816913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbjtp,Uid:6d94271b-296f-4e1d-8a91-44ce5605a368,Namespace:kube-system,Attempt:1,} returns sandbox id \"2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef\"" Jan 13 22:04:09.033508 containerd[1458]: time="2025-01-13T22:04:09.033153178Z" level=info msg="CreateContainer within sandbox \"2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 22:04:09.114505 containerd[1458]: time="2025-01-13T22:04:09.114390732Z" level=info msg="CreateContainer within sandbox \"2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33738d0c48589b490fe9693eabf7e4d8bd066393cfd840fdd14769df5cc72d50\"" Jan 13 22:04:09.115415 containerd[1458]: time="2025-01-13T22:04:09.115383575Z" level=info msg="StartContainer for \"33738d0c48589b490fe9693eabf7e4d8bd066393cfd840fdd14769df5cc72d50\"" Jan 13 22:04:09.152184 systemd[1]: Started cri-containerd-33738d0c48589b490fe9693eabf7e4d8bd066393cfd840fdd14769df5cc72d50.scope - libcontainer container 33738d0c48589b490fe9693eabf7e4d8bd066393cfd840fdd14769df5cc72d50. Jan 13 22:04:09.195861 containerd[1458]: time="2025-01-13T22:04:09.195203398Z" level=info msg="StartContainer for \"33738d0c48589b490fe9693eabf7e4d8bd066393cfd840fdd14769df5cc72d50\" returns successfully" Jan 13 22:04:09.340931 containerd[1458]: time="2025-01-13T22:04:09.340874963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:09.342941 containerd[1458]: time="2025-01-13T22:04:09.342891719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 22:04:09.344349 containerd[1458]: time="2025-01-13T22:04:09.344309640Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:09.346971 containerd[1458]: time="2025-01-13T22:04:09.346908239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:09.347735 containerd[1458]: time="2025-01-13T22:04:09.347631436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.924829706s" Jan 13 22:04:09.347735 containerd[1458]: time="2025-01-13T22:04:09.347664568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 22:04:09.349631 containerd[1458]: time="2025-01-13T22:04:09.349444279Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 22:04:09.350894 containerd[1458]: time="2025-01-13T22:04:09.350862020Z" level=info msg="CreateContainer within sandbox \"a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 22:04:09.374850 containerd[1458]: time="2025-01-13T22:04:09.374809152Z" level=info msg="CreateContainer within sandbox \"a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"43011b6b1253d91e8c5345cb187fc756eddedb0964458348d53e81848c73ea81\"" Jan 13 22:04:09.375810 containerd[1458]: time="2025-01-13T22:04:09.375309501Z" level=info msg="StartContainer for \"43011b6b1253d91e8c5345cb187fc756eddedb0964458348d53e81848c73ea81\"" Jan 13 22:04:09.408934 systemd[1]: Started cri-containerd-43011b6b1253d91e8c5345cb187fc756eddedb0964458348d53e81848c73ea81.scope - libcontainer container 43011b6b1253d91e8c5345cb187fc756eddedb0964458348d53e81848c73ea81. Jan 13 22:04:09.517915 systemd-networkd[1371]: cali6b839bb48dc: Gained IPv6LL Jan 13 22:04:09.634430 containerd[1458]: time="2025-01-13T22:04:09.634328976Z" level=info msg="StartContainer for \"43011b6b1253d91e8c5345cb187fc756eddedb0964458348d53e81848c73ea81\" returns successfully" Jan 13 22:04:09.963440 kubelet[2561]: I0113 22:04:09.962353 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fbjtp" podStartSLOduration=45.962333375 podStartE2EDuration="45.962333375s" podCreationTimestamp="2025-01-13 22:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:04:09.946584702 +0000 UTC m=+49.518459055" watchObservedRunningTime="2025-01-13 22:04:09.962333375 +0000 UTC m=+49.534207728" Jan 13 22:04:10.350336 systemd-networkd[1371]: cali8f6eccf0080: Gained IPv6LL Jan 13 22:04:13.785433 containerd[1458]: time="2025-01-13T22:04:13.785396048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:13.787586 containerd[1458]: time="2025-01-13T22:04:13.787556212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 22:04:13.788978 containerd[1458]: time="2025-01-13T22:04:13.788938896Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:13.791868 containerd[1458]: time="2025-01-13T22:04:13.791827337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:13.792531 containerd[1458]: time="2025-01-13T22:04:13.792495011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.443016748s" Jan 13 22:04:13.792578 containerd[1458]: time="2025-01-13T22:04:13.792531790Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 22:04:13.799519 containerd[1458]: time="2025-01-13T22:04:13.799498805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 22:04:13.811339 containerd[1458]: time="2025-01-13T22:04:13.811301737Z" level=info msg="CreateContainer within sandbox \"ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 22:04:13.847340 containerd[1458]: time="2025-01-13T22:04:13.847302491Z" level=info msg="CreateContainer within sandbox \"ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0fc7ffa37d6dd8752afcb198005ec049f5cf7c236c8f7c30c7fe0028d8edff95\"" Jan 13 22:04:13.848994 containerd[1458]: time="2025-01-13T22:04:13.848956946Z" level=info msg="StartContainer for \"0fc7ffa37d6dd8752afcb198005ec049f5cf7c236c8f7c30c7fe0028d8edff95\"" Jan 13 22:04:13.892802 systemd[1]: Started cri-containerd-0fc7ffa37d6dd8752afcb198005ec049f5cf7c236c8f7c30c7fe0028d8edff95.scope - libcontainer container 0fc7ffa37d6dd8752afcb198005ec049f5cf7c236c8f7c30c7fe0028d8edff95. Jan 13 22:04:13.941347 containerd[1458]: time="2025-01-13T22:04:13.941250740Z" level=info msg="StartContainer for \"0fc7ffa37d6dd8752afcb198005ec049f5cf7c236c8f7c30c7fe0028d8edff95\" returns successfully" Jan 13 22:04:13.969017 kubelet[2561]: I0113 22:04:13.968625 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74d6865bb-8mm8k" podStartSLOduration=31.752051431 podStartE2EDuration="38.968609636s" podCreationTimestamp="2025-01-13 22:03:35 +0000 UTC" firstStartedPulling="2025-01-13 22:04:06.582736376 +0000 UTC m=+46.154610729" lastFinishedPulling="2025-01-13 22:04:13.799294581 +0000 UTC m=+53.371168934" observedRunningTime="2025-01-13 22:04:13.967569203 +0000 UTC m=+53.539443556" watchObservedRunningTime="2025-01-13 22:04:13.968609636 +0000 UTC m=+53.540483989" Jan 13 22:04:18.873754 containerd[1458]: time="2025-01-13T22:04:18.873500286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:18.875697 containerd[1458]: time="2025-01-13T22:04:18.875656552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 22:04:18.877708 containerd[1458]: time="2025-01-13T22:04:18.877665502Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:18.882087 containerd[1458]: time="2025-01-13T22:04:18.881880230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:18.882883 containerd[1458]: time="2025-01-13T22:04:18.882329252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size 
\"43494504\" in 5.082723487s" Jan 13 22:04:18.882883 containerd[1458]: time="2025-01-13T22:04:18.882368247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 22:04:18.884170 containerd[1458]: time="2025-01-13T22:04:18.884004416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 22:04:18.884838 containerd[1458]: time="2025-01-13T22:04:18.884815869Z" level=info msg="CreateContainer within sandbox \"18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 22:04:18.903918 containerd[1458]: time="2025-01-13T22:04:18.903883177Z" level=info msg="CreateContainer within sandbox \"18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4ac9c3d260c73ce2d07c230eb38b82032dbc8b87084acfb82c6b5db704e3f172\"" Jan 13 22:04:18.904593 containerd[1458]: time="2025-01-13T22:04:18.904474397Z" level=info msg="StartContainer for \"4ac9c3d260c73ce2d07c230eb38b82032dbc8b87084acfb82c6b5db704e3f172\"" Jan 13 22:04:18.948921 systemd[1]: Started cri-containerd-4ac9c3d260c73ce2d07c230eb38b82032dbc8b87084acfb82c6b5db704e3f172.scope - libcontainer container 4ac9c3d260c73ce2d07c230eb38b82032dbc8b87084acfb82c6b5db704e3f172. Jan 13 22:04:18.993507 containerd[1458]: time="2025-01-13T22:04:18.993286054Z" level=info msg="StartContainer for \"4ac9c3d260c73ce2d07c230eb38b82032dbc8b87084acfb82c6b5db704e3f172\" returns successfully" Jan 13 22:04:19.387446 containerd[1458]: time="2025-01-13T22:04:19.387402107Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:19.389945 containerd[1458]: time="2025-01-13T22:04:19.389896387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 22:04:19.394564 containerd[1458]: time="2025-01-13T22:04:19.394517397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 510.483285ms" Jan 13 22:04:19.394647 containerd[1458]: time="2025-01-13T22:04:19.394621903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 22:04:19.398384 containerd[1458]: time="2025-01-13T22:04:19.398351282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 22:04:19.399549 containerd[1458]: time="2025-01-13T22:04:19.399521377Z" level=info msg="CreateContainer within sandbox \"da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 22:04:19.430573 containerd[1458]: time="2025-01-13T22:04:19.430531392Z" level=info msg="CreateContainer within sandbox \"da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56ce1e9db239c2194b17beb2d555b8dd6c0dcfadba06fef0a3c837f58d9c403d\"" Jan 13 
22:04:19.431653 containerd[1458]: time="2025-01-13T22:04:19.431598575Z" level=info msg="StartContainer for \"56ce1e9db239c2194b17beb2d555b8dd6c0dcfadba06fef0a3c837f58d9c403d\"" Jan 13 22:04:19.471949 systemd[1]: Started cri-containerd-56ce1e9db239c2194b17beb2d555b8dd6c0dcfadba06fef0a3c837f58d9c403d.scope - libcontainer container 56ce1e9db239c2194b17beb2d555b8dd6c0dcfadba06fef0a3c837f58d9c403d. Jan 13 22:04:19.572902 containerd[1458]: time="2025-01-13T22:04:19.571453673Z" level=info msg="StartContainer for \"56ce1e9db239c2194b17beb2d555b8dd6c0dcfadba06fef0a3c837f58d9c403d\" returns successfully" Jan 13 22:04:20.053685 kubelet[2561]: I0113 22:04:20.053157 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c74fff689-j6m89" podStartSLOduration=33.763449999 podStartE2EDuration="45.053143633s" podCreationTimestamp="2025-01-13 22:03:35 +0000 UTC" firstStartedPulling="2025-01-13 22:04:08.108008209 +0000 UTC m=+47.679882572" lastFinishedPulling="2025-01-13 22:04:19.397701843 +0000 UTC m=+58.969576206" observedRunningTime="2025-01-13 22:04:20.051677281 +0000 UTC m=+59.623551634" watchObservedRunningTime="2025-01-13 22:04:20.053143633 +0000 UTC m=+59.625017996" Jan 13 22:04:20.069173 kubelet[2561]: I0113 22:04:20.069103 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c74fff689-gwsjw" podStartSLOduration=33.23440583 podStartE2EDuration="45.069083721s" podCreationTimestamp="2025-01-13 22:03:35 +0000 UTC" firstStartedPulling="2025-01-13 22:04:07.048575896 +0000 UTC m=+46.620450259" lastFinishedPulling="2025-01-13 22:04:18.883253797 +0000 UTC m=+58.455128150" observedRunningTime="2025-01-13 22:04:20.067796095 +0000 UTC m=+59.639670458" watchObservedRunningTime="2025-01-13 22:04:20.069083721 +0000 UTC m=+59.640958074" Jan 13 22:04:20.613488 containerd[1458]: time="2025-01-13T22:04:20.613426303Z" level=info msg="StopPodSandbox for \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\"" Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.684 [WARNING][4883] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0", GenerateName:"calico-kube-controllers-74d6865bb-", Namespace:"calico-system", SelfLink:"", UID:"2984ddea-13f1-4959-bb56-e4d645dd9d89", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d6865bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62", Pod:"calico-kube-controllers-74d6865bb-8mm8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad805b36fc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.684 [INFO][4883] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.684 [INFO][4883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" iface="eth0" netns="" Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.684 [INFO][4883] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.685 [INFO][4883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.743 [INFO][4889] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" HandleID="k8s-pod-network.e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.744 [INFO][4889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.744 [INFO][4889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.757 [WARNING][4889] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" HandleID="k8s-pod-network.e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.757 [INFO][4889] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" HandleID="k8s-pod-network.e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.761 [INFO][4889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:20.763492 containerd[1458]: 2025-01-13 22:04:20.762 [INFO][4883] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:20.763492 containerd[1458]: time="2025-01-13T22:04:20.763330960Z" level=info msg="TearDown network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\" successfully" Jan 13 22:04:20.763492 containerd[1458]: time="2025-01-13T22:04:20.763382717Z" level=info msg="StopPodSandbox for \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\" returns successfully" Jan 13 22:04:20.765123 containerd[1458]: time="2025-01-13T22:04:20.764026475Z" level=info msg="RemovePodSandbox for \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\"" Jan 13 22:04:20.765123 containerd[1458]: time="2025-01-13T22:04:20.764051843Z" level=info msg="Forcibly stopping sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\"" Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.817 [WARNING][4907] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0", GenerateName:"calico-kube-controllers-74d6865bb-", Namespace:"calico-system", SelfLink:"", UID:"2984ddea-13f1-4959-bb56-e4d645dd9d89", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d6865bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"ad55eb6bbc05e1f332b2abaa957fa8b6ed2646894bf352fbf62d1f449a409e62", Pod:"calico-kube-controllers-74d6865bb-8mm8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad805b36fc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.817 [INFO][4907] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.817 [INFO][4907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" iface="eth0" netns="" Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.817 [INFO][4907] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.817 [INFO][4907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.843 [INFO][4913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" HandleID="k8s-pod-network.e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.843 [INFO][4913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.843 [INFO][4913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.850 [WARNING][4913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" HandleID="k8s-pod-network.e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.851 [INFO][4913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" HandleID="k8s-pod-network.e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--kube--controllers--74d6865bb--8mm8k-eth0" Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.852 [INFO][4913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:20.857491 containerd[1458]: 2025-01-13 22:04:20.855 [INFO][4907] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56" Jan 13 22:04:20.858014 containerd[1458]: time="2025-01-13T22:04:20.857527211Z" level=info msg="TearDown network for sandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\" successfully" Jan 13 22:04:20.877697 containerd[1458]: time="2025-01-13T22:04:20.877530142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:04:20.877697 containerd[1458]: time="2025-01-13T22:04:20.877653443Z" level=info msg="RemovePodSandbox \"e791230010278d26e1f3f2800445ceca005bfdc9b5d0c54fd299bb579fb1ba56\" returns successfully" Jan 13 22:04:20.880880 containerd[1458]: time="2025-01-13T22:04:20.879067957Z" level=info msg="StopPodSandbox for \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\"" Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.928 [WARNING][4931] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d18d3a49-282b-4a21-8964-114828657572", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931", Pod:"csi-node-driver-mt5rr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5619d388dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.929 [INFO][4931] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.929 [INFO][4931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" iface="eth0" netns="" Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.929 [INFO][4931] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.929 [INFO][4931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.961 [INFO][4937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" HandleID="k8s-pod-network.3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.961 [INFO][4937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.961 [INFO][4937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.996 [WARNING][4937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" HandleID="k8s-pod-network.3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.996 [INFO][4937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" HandleID="k8s-pod-network.3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:20.999 [INFO][4937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:21.001873 containerd[1458]: 2025-01-13 22:04:21.000 [INFO][4931] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:21.106457 containerd[1458]: time="2025-01-13T22:04:21.001920356Z" level=info msg="TearDown network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\" successfully" Jan 13 22:04:21.106457 containerd[1458]: time="2025-01-13T22:04:21.001948007Z" level=info msg="StopPodSandbox for \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\" returns successfully" Jan 13 22:04:21.147053 containerd[1458]: time="2025-01-13T22:04:21.142258325Z" level=info msg="RemovePodSandbox for \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\"" Jan 13 22:04:21.147053 containerd[1458]: time="2025-01-13T22:04:21.142331803Z" level=info msg="Forcibly stopping sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\"" Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.223 [WARNING][4955] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d18d3a49-282b-4a21-8964-114828657572", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931", Pod:"csi-node-driver-mt5rr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5619d388dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.223 [INFO][4955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.223 [INFO][4955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" iface="eth0" netns="" Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.223 [INFO][4955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.223 [INFO][4955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.247 [INFO][4961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" HandleID="k8s-pod-network.3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.248 [INFO][4961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.248 [INFO][4961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.256 [WARNING][4961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" HandleID="k8s-pod-network.3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.256 [INFO][4961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" HandleID="k8s-pod-network.3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-csi--node--driver--mt5rr-eth0" Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.259 [INFO][4961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:21.329293 containerd[1458]: 2025-01-13 22:04:21.261 [INFO][4955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3" Jan 13 22:04:21.329946 containerd[1458]: time="2025-01-13T22:04:21.329857559Z" level=info msg="TearDown network for sandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\" successfully" Jan 13 22:04:21.410999 kubelet[2561]: I0113 22:04:21.410190 2561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:04:21.951829 containerd[1458]: time="2025-01-13T22:04:21.949954688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:04:21.951829 containerd[1458]: time="2025-01-13T22:04:21.950014070Z" level=info msg="RemovePodSandbox \"3a1307dcb43460aedc74f0d29526179d52ed4bc9f55acde954453c8f7ba381d3\" returns successfully" Jan 13 22:04:21.967409 containerd[1458]: time="2025-01-13T22:04:21.967359474Z" level=info msg="StopPodSandbox for \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\"" Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.071 [WARNING][4994] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6d94271b-296f-4e1d-8a91-44ce5605a368", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef", Pod:"coredns-6f6b679f8f-fbjtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f6eccf0080", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.071 [INFO][4994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.071 [INFO][4994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" iface="eth0" netns="" Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.071 [INFO][4994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.071 [INFO][4994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.118 [INFO][5014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" HandleID="k8s-pod-network.6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.118 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.118 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.134 [WARNING][5014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" HandleID="k8s-pod-network.6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.134 [INFO][5014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" HandleID="k8s-pod-network.6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.137 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:22.143879 containerd[1458]: 2025-01-13 22:04:22.141 [INFO][4994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:22.143879 containerd[1458]: time="2025-01-13T22:04:22.143728821Z" level=info msg="TearDown network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\" successfully" Jan 13 22:04:22.144350 containerd[1458]: time="2025-01-13T22:04:22.143947220Z" level=info msg="StopPodSandbox for \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\" returns successfully" Jan 13 22:04:22.145386 containerd[1458]: time="2025-01-13T22:04:22.145067652Z" level=info msg="RemovePodSandbox for \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\"" Jan 13 22:04:22.145386 containerd[1458]: time="2025-01-13T22:04:22.145206132Z" level=info msg="Forcibly stopping sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\"" Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.241 [WARNING][5035] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6d94271b-296f-4e1d-8a91-44ce5605a368", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"2fc27d18f8a3177bfa0fa9d83a88cc941c5ea87060ba11636ee024276f1d4cef", Pod:"coredns-6f6b679f8f-fbjtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f6eccf0080", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.241 [INFO][5035] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.242 [INFO][5035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" iface="eth0" netns="" Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.242 [INFO][5035] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.242 [INFO][5035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.290 [INFO][5041] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" HandleID="k8s-pod-network.6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.290 [INFO][5041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.290 [INFO][5041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.302 [WARNING][5041] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" HandleID="k8s-pod-network.6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.302 [INFO][5041] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" HandleID="k8s-pod-network.6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--fbjtp-eth0" Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.305 [INFO][5041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:22.310876 containerd[1458]: 2025-01-13 22:04:22.306 [INFO][5035] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a" Jan 13 22:04:22.310876 containerd[1458]: time="2025-01-13T22:04:22.310184225Z" level=info msg="TearDown network for sandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\" successfully" Jan 13 22:04:22.316893 containerd[1458]: time="2025-01-13T22:04:22.316849381Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:04:22.317319 containerd[1458]: time="2025-01-13T22:04:22.316922107Z" level=info msg="RemovePodSandbox \"6c6b800a41d762044363837333942832df7aad286fd53b7bec606e1317c7f46a\" returns successfully" Jan 13 22:04:22.317370 containerd[1458]: time="2025-01-13T22:04:22.317347315Z" level=info msg="StopPodSandbox for \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\"" Jan 13 22:04:22.433571 containerd[1458]: time="2025-01-13T22:04:22.432569675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:22.436604 containerd[1458]: time="2025-01-13T22:04:22.436560453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.387 [WARNING][5060] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0", GenerateName:"calico-apiserver-6c74fff689-", Namespace:"calico-apiserver", SelfLink:"", UID:"be85207e-f371-4ee7-9430-e6fb2baafa7b", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c74fff689", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b", Pod:"calico-apiserver-6c74fff689-gwsjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88e2f0c4e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.387 [INFO][5060] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.387 [INFO][5060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" iface="eth0" netns="" Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.387 [INFO][5060] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.387 [INFO][5060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.420 [INFO][5066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" HandleID="k8s-pod-network.6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.422 [INFO][5066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.422 [INFO][5066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.430 [WARNING][5066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" HandleID="k8s-pod-network.6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.430 [INFO][5066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" HandleID="k8s-pod-network.6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.434 [INFO][5066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:22.437855 containerd[1458]: 2025-01-13 22:04:22.436 [INFO][5060] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:22.438441 containerd[1458]: time="2025-01-13T22:04:22.438403061Z" level=info msg="TearDown network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\" successfully" Jan 13 22:04:22.438518 containerd[1458]: time="2025-01-13T22:04:22.438501796Z" level=info msg="StopPodSandbox for \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\" returns successfully" Jan 13 22:04:22.439333 containerd[1458]: time="2025-01-13T22:04:22.439311735Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:22.439591 containerd[1458]: time="2025-01-13T22:04:22.439532429Z" level=info msg="RemovePodSandbox for \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\"" Jan 13 22:04:22.439735 containerd[1458]: time="2025-01-13T22:04:22.439719010Z" level=info msg="Forcibly stopping sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\"" Jan 13 22:04:22.444903 containerd[1458]: time="2025-01-13T22:04:22.444869733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:04:22.446498 containerd[1458]: time="2025-01-13T22:04:22.445961361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.047573291s" Jan 13 22:04:22.446498 containerd[1458]: time="2025-01-13T22:04:22.446012186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 22:04:22.449450 containerd[1458]: time="2025-01-13T22:04:22.449418116Z" level=info msg="CreateContainer within sandbox \"a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 22:04:22.475449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3808492810.mount: Deactivated successfully. 
Jan 13 22:04:22.492386 containerd[1458]: time="2025-01-13T22:04:22.492259356Z" level=info msg="CreateContainer within sandbox \"a40b2e0a0ba8d49abe7ca7a955817dd91226c71f29b0c11a688138a97aaa4931\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"507b04befa61e5f517fa047ac41f3be4d870a429c25dc827b4a2235a4f0aaab3\"" Jan 13 22:04:22.494455 containerd[1458]: time="2025-01-13T22:04:22.494389903Z" level=info msg="StartContainer for \"507b04befa61e5f517fa047ac41f3be4d870a429c25dc827b4a2235a4f0aaab3\"" Jan 13 22:04:22.553921 systemd[1]: Started cri-containerd-507b04befa61e5f517fa047ac41f3be4d870a429c25dc827b4a2235a4f0aaab3.scope - libcontainer container 507b04befa61e5f517fa047ac41f3be4d870a429c25dc827b4a2235a4f0aaab3. Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.506 [WARNING][5084] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0", GenerateName:"calico-apiserver-6c74fff689-", Namespace:"calico-apiserver", SelfLink:"", UID:"be85207e-f371-4ee7-9430-e6fb2baafa7b", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c74fff689", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"18d9412bc5dee468e55525080818704824a878c182ae18881fb415131a15df3b", Pod:"calico-apiserver-6c74fff689-gwsjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88e2f0c4e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.506 [INFO][5084] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.506 [INFO][5084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" iface="eth0" netns="" Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.506 [INFO][5084] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.506 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.541 [INFO][5091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" HandleID="k8s-pod-network.6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.541 [INFO][5091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.541 [INFO][5091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.552 [WARNING][5091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" HandleID="k8s-pod-network.6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.552 [INFO][5091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" HandleID="k8s-pod-network.6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--gwsjw-eth0" Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.555 [INFO][5091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:22.559025 containerd[1458]: 2025-01-13 22:04:22.557 [INFO][5084] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88" Jan 13 22:04:22.559972 containerd[1458]: time="2025-01-13T22:04:22.559052554Z" level=info msg="TearDown network for sandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\" successfully" Jan 13 22:04:22.564603 containerd[1458]: time="2025-01-13T22:04:22.564469727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 22:04:22.564603 containerd[1458]: time="2025-01-13T22:04:22.564547113Z" level=info msg="RemovePodSandbox \"6613323bca8f90b772a5cb404b81c4448a4a4022ef5bd9ff335ebb0b21a8ea88\" returns successfully" Jan 13 22:04:22.565684 containerd[1458]: time="2025-01-13T22:04:22.565446861Z" level=info msg="StopPodSandbox for \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\"" Jan 13 22:04:22.598883 containerd[1458]: time="2025-01-13T22:04:22.598809776Z" level=info msg="StartContainer for \"507b04befa61e5f517fa047ac41f3be4d870a429c25dc827b4a2235a4f0aaab3\" returns successfully" Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.631 [WARNING][5133] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0", GenerateName:"calico-apiserver-6c74fff689-", Namespace:"calico-apiserver", SelfLink:"", UID:"89431cd0-5f5b-411e-adcd-b6574162f169", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c74fff689", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f", Pod:"calico-apiserver-6c74fff689-j6m89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b839bb48dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.632 [INFO][5133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.632 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" iface="eth0" netns="" Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.632 [INFO][5133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.632 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.653 [INFO][5152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" HandleID="k8s-pod-network.677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.653 [INFO][5152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.653 [INFO][5152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.663 [WARNING][5152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" HandleID="k8s-pod-network.677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.663 [INFO][5152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" HandleID="k8s-pod-network.677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.665 [INFO][5152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:22.667229 containerd[1458]: 2025-01-13 22:04:22.666 [INFO][5133] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:22.668073 containerd[1458]: time="2025-01-13T22:04:22.667869967Z" level=info msg="TearDown network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\" successfully" Jan 13 22:04:22.668073 containerd[1458]: time="2025-01-13T22:04:22.667913409Z" level=info msg="StopPodSandbox for \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\" returns successfully" Jan 13 22:04:22.668509 containerd[1458]: time="2025-01-13T22:04:22.668479280Z" level=info msg="RemovePodSandbox for \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\"" Jan 13 22:04:22.668565 containerd[1458]: time="2025-01-13T22:04:22.668517602Z" level=info msg="Forcibly stopping sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\"" Jan 13 22:04:22.717673 kubelet[2561]: I0113 22:04:22.717207 2561 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 22:04:22.717673 kubelet[2561]: I0113 22:04:22.717247 2561 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.709 [WARNING][5170] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0", GenerateName:"calico-apiserver-6c74fff689-", Namespace:"calico-apiserver", SelfLink:"", UID:"89431cd0-5f5b-411e-adcd-b6574162f169", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c74fff689", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"da11337b358369bd6771b2f1bb0a7932b8b4ef5a931d3164b970130a053da11f", Pod:"calico-apiserver-6c74fff689-j6m89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b839bb48dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.709 [INFO][5170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.709 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" iface="eth0" netns="" Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.709 [INFO][5170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.709 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.742 [INFO][5176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" HandleID="k8s-pod-network.677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.743 [INFO][5176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.743 [INFO][5176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.753 [WARNING][5176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" HandleID="k8s-pod-network.677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.753 [INFO][5176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" HandleID="k8s-pod-network.677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-calico--apiserver--6c74fff689--j6m89-eth0" Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.755 [INFO][5176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:22.758845 containerd[1458]: 2025-01-13 22:04:22.756 [INFO][5170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57" Jan 13 22:04:22.758845 containerd[1458]: time="2025-01-13T22:04:22.757872230Z" level=info msg="TearDown network for sandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\" successfully" Jan 13 22:04:22.764496 containerd[1458]: time="2025-01-13T22:04:22.764458267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:04:22.764676 containerd[1458]: time="2025-01-13T22:04:22.764655377Z" level=info msg="RemovePodSandbox \"677d8d02d9b3f45d4cd2cddcf25b91d16df3209eb015d1b4aaea3dea1759aa57\" returns successfully" Jan 13 22:04:22.765237 containerd[1458]: time="2025-01-13T22:04:22.765215768Z" level=info msg="StopPodSandbox for \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\"" Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.815 [WARNING][5194] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d9885fd8-16b7-4463-95bb-7f4600467308", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889", Pod:"coredns-6f6b679f8f-lttx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60cb3444374", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.815 [INFO][5194] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.815 [INFO][5194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" iface="eth0" netns="" Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.815 [INFO][5194] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.815 [INFO][5194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.837 [INFO][5200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" HandleID="k8s-pod-network.0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.837 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.837 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.843 [WARNING][5200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" HandleID="k8s-pod-network.0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.843 [INFO][5200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" HandleID="k8s-pod-network.0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.845 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:22.847669 containerd[1458]: 2025-01-13 22:04:22.846 [INFO][5194] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:22.847669 containerd[1458]: time="2025-01-13T22:04:22.847494580Z" level=info msg="TearDown network for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\" successfully" Jan 13 22:04:22.847669 containerd[1458]: time="2025-01-13T22:04:22.847518665Z" level=info msg="StopPodSandbox for \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\" returns successfully" Jan 13 22:04:22.849587 containerd[1458]: time="2025-01-13T22:04:22.849282735Z" level=info msg="RemovePodSandbox for \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\"" Jan 13 22:04:22.849587 containerd[1458]: time="2025-01-13T22:04:22.849310367Z" level=info msg="Forcibly stopping sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\"" Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.885 [WARNING][5218] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d9885fd8-16b7-4463-95bb-7f4600467308", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-2-0f60d24a30.novalocal", ContainerID:"5ec7e9a01ac1e4a0b8c0f0e1fdf6320e34c7487f17619a3e253279e633795889", Pod:"coredns-6f6b679f8f-lttx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60cb3444374", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.885 [INFO][5218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.885 [INFO][5218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" iface="eth0" netns="" Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.885 [INFO][5218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.885 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.906 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" HandleID="k8s-pod-network.0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.906 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.906 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.914 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" HandleID="k8s-pod-network.0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.914 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" HandleID="k8s-pod-network.0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Workload="ci--4081--3--0--2--0f60d24a30.novalocal-k8s-coredns--6f6b679f8f--lttx5-eth0" Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.916 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:04:22.918192 containerd[1458]: 2025-01-13 22:04:22.917 [INFO][5218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5" Jan 13 22:04:22.919151 containerd[1458]: time="2025-01-13T22:04:22.918680971Z" level=info msg="TearDown network for sandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\" successfully" Jan 13 22:04:22.923616 containerd[1458]: time="2025-01-13T22:04:22.923591764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:04:22.924000 containerd[1458]: time="2025-01-13T22:04:22.923723221Z" level=info msg="RemovePodSandbox \"0324c0e9102eb3618e94866fec57dc99b88f2e06fee4575543f5f8590f1728f5\" returns successfully" Jan 13 22:04:23.461760 kubelet[2561]: I0113 22:04:23.460891 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mt5rr" podStartSLOduration=32.435415161 podStartE2EDuration="48.460859329s" podCreationTimestamp="2025-01-13 22:03:35 +0000 UTC" firstStartedPulling="2025-01-13 22:04:06.421875602 +0000 UTC m=+45.993749955" lastFinishedPulling="2025-01-13 22:04:22.44731977 +0000 UTC m=+62.019194123" observedRunningTime="2025-01-13 22:04:23.459622799 +0000 UTC m=+63.031497202" watchObservedRunningTime="2025-01-13 22:04:23.460859329 +0000 UTC m=+63.032733732" Jan 13 22:04:37.262035 kubelet[2561]: I0113 22:04:37.261238 2561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:04:37.847103 systemd[1]: Started sshd@7-172.24.4.131:22-172.24.4.1:39682.service - OpenSSH per-connection server daemon (172.24.4.1:39682). Jan 13 22:04:39.251636 sshd[5267]: Accepted publickey for core from 172.24.4.1 port 39682 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:04:39.268473 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:04:39.321217 systemd-logind[1440]: New session 10 of user core. Jan 13 22:04:39.327095 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 22:04:40.138362 sshd[5267]: pam_unix(sshd:session): session closed for user core Jan 13 22:04:40.148445 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Jan 13 22:04:40.148723 systemd[1]: sshd@7-172.24.4.131:22-172.24.4.1:39682.service: Deactivated successfully. 
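
[Editor's note] The pod_startup_latency_tracker entry above reports two durations for csi-node-driver-mt5rr: podStartE2EDuration (observedRunningTime minus podCreationTimestamp) and podStartSLOduration, which additionally excludes image-pull time (lastFinishedPulling minus firstStartedPulling). A small Go check of that arithmetic, using the timestamps copied from the log; the layout string is an assumption that the values were printed with Go's default time.Time format.

```go
package main

import (
	"fmt"
	"time"
)

// Default Go time.Time print format, which the kubelet entry appears to use.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps from the pod_startup_latency_tracker entry above.
	created := mustParse("2025-01-13 22:03:35 +0000 UTC")
	firstPull := mustParse("2025-01-13 22:04:06.421875602 +0000 UTC")
	lastPull := mustParse("2025-01-13 22:04:22.44731977 +0000 UTC")
	running := mustParse("2025-01-13 22:04:23.460859329 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus image-pull time

	fmt.Println(e2e) // 48.460859329s, matching the log
	fmt.Println(slo) // 32.435415161s, matching podStartSLOduration
}
```

Both printed values reproduce the figures in the entry, confirming that the roughly 16 s gap between the two durations is entirely image-pull time.
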
Jan 13 22:04:40.156172 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 22:04:40.160929 systemd-logind[1440]: Removed session 10. Jan 13 22:04:45.152306 systemd[1]: Started sshd@8-172.24.4.131:22-172.24.4.1:49046.service - OpenSSH per-connection server daemon (172.24.4.1:49046). Jan 13 22:04:46.236877 sshd[5309]: Accepted publickey for core from 172.24.4.1 port 49046 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:04:46.237656 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:04:46.248388 systemd-logind[1440]: New session 11 of user core. Jan 13 22:04:46.252983 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 22:04:46.940820 sshd[5309]: pam_unix(sshd:session): session closed for user core Jan 13 22:04:46.948168 systemd[1]: sshd@8-172.24.4.131:22-172.24.4.1:49046.service: Deactivated successfully. Jan 13 22:04:46.954184 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 22:04:46.956065 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Jan 13 22:04:46.959083 systemd-logind[1440]: Removed session 11. Jan 13 22:04:51.844806 systemd[1]: run-containerd-runc-k8s.io-0fc7ffa37d6dd8752afcb198005ec049f5cf7c236c8f7c30c7fe0028d8edff95-runc.4OfKgS.mount: Deactivated successfully. Jan 13 22:04:51.956123 systemd[1]: Started sshd@9-172.24.4.131:22-172.24.4.1:49058.service - OpenSSH per-connection server daemon (172.24.4.1:49058). Jan 13 22:04:53.172585 sshd[5346]: Accepted publickey for core from 172.24.4.1 port 49058 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:04:53.175386 sshd[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:04:53.185720 systemd-logind[1440]: New session 12 of user core. Jan 13 22:04:53.197691 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 22:04:53.918589 sshd[5346]: pam_unix(sshd:session): session closed for user core Jan 13 22:04:53.928355 systemd[1]: sshd@9-172.24.4.131:22-172.24.4.1:49058.service: Deactivated successfully. Jan 13 22:04:53.936093 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 22:04:53.938125 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. Jan 13 22:04:53.941118 systemd-logind[1440]: Removed session 12. Jan 13 22:04:58.942604 systemd[1]: Started sshd@10-172.24.4.131:22-172.24.4.1:56176.service - OpenSSH per-connection server daemon (172.24.4.1:56176). Jan 13 22:05:00.650220 sshd[5361]: Accepted publickey for core from 172.24.4.1 port 56176 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:00.653256 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:00.662934 systemd-logind[1440]: New session 13 of user core. Jan 13 22:05:00.673118 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 22:05:01.439342 sshd[5361]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:01.457529 systemd[1]: Started sshd@11-172.24.4.131:22-172.24.4.1:56192.service - OpenSSH per-connection server daemon (172.24.4.1:56192). Jan 13 22:05:01.467724 systemd[1]: sshd@10-172.24.4.131:22-172.24.4.1:56176.service: Deactivated successfully. Jan 13 22:05:01.472491 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 22:05:01.476407 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Jan 13 22:05:01.478971 systemd-logind[1440]: Removed session 13. 
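
[Editor's note] Each SSH connection above produces a matched "New session N" / "Removed session N" pair from systemd-logind, so session lifetimes can be read straight off the timestamps. A hedged Go sketch computing a few of them, with the open/close times hardcoded from the entries above (a real tool would parse them out of the journal):

```go
package main

import (
	"fmt"
	"time"
)

// Open/close times read off the "New session N" / "Removed session N"
// pairs in the log above.
var sessions = map[int][2]string{
	10: {"22:04:39.321217", "22:04:40.160929"},
	11: {"22:04:46.248388", "22:04:46.959083"},
	12: {"22:04:53.185720", "22:04:53.941118"},
}

func main() {
	const layout = "15:04:05.999999"
	for id, s := range sessions {
		// The hardcoded timestamps above are well-formed, so parse
		// errors are ignored in this sketch.
		opened, _ := time.Parse(layout, s[0])
		closed, _ := time.Parse(layout, s[1])
		fmt.Printf("session %d lasted %v\n", id, closed.Sub(opened))
	}
}
```
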
Jan 13 22:05:02.659661 sshd[5372]: Accepted publickey for core from 172.24.4.1 port 56192 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:02.662031 sshd[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:02.670324 systemd-logind[1440]: New session 14 of user core. Jan 13 22:05:02.677069 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 22:05:03.407199 sshd[5372]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:03.417472 systemd[1]: sshd@11-172.24.4.131:22-172.24.4.1:56192.service: Deactivated successfully. Jan 13 22:05:03.421613 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 22:05:03.424362 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Jan 13 22:05:03.434477 systemd[1]: Started sshd@12-172.24.4.131:22-172.24.4.1:56208.service - OpenSSH per-connection server daemon (172.24.4.1:56208). Jan 13 22:05:03.438212 systemd-logind[1440]: Removed session 14. Jan 13 22:05:04.729300 sshd[5384]: Accepted publickey for core from 172.24.4.1 port 56208 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:04.732266 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:04.745020 systemd-logind[1440]: New session 15 of user core. Jan 13 22:05:04.762111 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 22:05:05.607457 sshd[5384]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:05.613922 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Jan 13 22:05:05.614561 systemd[1]: sshd@12-172.24.4.131:22-172.24.4.1:56208.service: Deactivated successfully. Jan 13 22:05:05.620949 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 22:05:05.626371 systemd-logind[1440]: Removed session 15. Jan 13 22:05:10.633302 systemd[1]: Started sshd@13-172.24.4.131:22-172.24.4.1:54502.service - OpenSSH per-connection server daemon (172.24.4.1:54502). Jan 13 22:05:11.953350 sshd[5422]: Accepted publickey for core from 172.24.4.1 port 54502 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:11.956210 sshd[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:11.968924 systemd-logind[1440]: New session 16 of user core. Jan 13 22:05:11.977170 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 22:05:12.797062 sshd[5422]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:12.800454 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Jan 13 22:05:12.801186 systemd[1]: sshd@13-172.24.4.131:22-172.24.4.1:54502.service: Deactivated successfully. Jan 13 22:05:12.803367 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 22:05:12.805183 systemd-logind[1440]: Removed session 16. Jan 13 22:05:17.808908 systemd[1]: Started sshd@14-172.24.4.131:22-172.24.4.1:47138.service - OpenSSH per-connection server daemon (172.24.4.1:47138). Jan 13 22:05:19.144992 sshd[5435]: Accepted publickey for core from 172.24.4.1 port 47138 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:19.147288 sshd[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:19.158541 systemd-logind[1440]: New session 17 of user core. Jan 13 22:05:19.165064 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 13 22:05:19.840967 sshd[5435]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:19.856576 systemd[1]: sshd@14-172.24.4.131:22-172.24.4.1:47138.service: Deactivated successfully. Jan 13 22:05:19.859696 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 22:05:19.861215 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. Jan 13 22:05:19.862980 systemd-logind[1440]: Removed session 17. Jan 13 22:05:21.843886 systemd[1]: run-containerd-runc-k8s.io-0fc7ffa37d6dd8752afcb198005ec049f5cf7c236c8f7c30c7fe0028d8edff95-runc.eF0wZ5.mount: Deactivated successfully. Jan 13 22:05:24.846103 systemd[1]: Started sshd@15-172.24.4.131:22-172.24.4.1:53968.service - OpenSSH per-connection server daemon (172.24.4.1:53968). Jan 13 22:05:25.985821 sshd[5475]: Accepted publickey for core from 172.24.4.1 port 53968 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:25.989020 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:26.000554 systemd-logind[1440]: New session 18 of user core. Jan 13 22:05:26.007144 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 22:05:26.685208 sshd[5475]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:26.696919 systemd[1]: sshd@15-172.24.4.131:22-172.24.4.1:53968.service: Deactivated successfully. Jan 13 22:05:26.700999 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 22:05:26.705881 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Jan 13 22:05:26.714350 systemd[1]: Started sshd@16-172.24.4.131:22-172.24.4.1:53974.service - OpenSSH per-connection server daemon (172.24.4.1:53974). Jan 13 22:05:26.717623 systemd-logind[1440]: Removed session 18. Jan 13 22:05:28.148927 sshd[5490]: Accepted publickey for core from 172.24.4.1 port 53974 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:28.151878 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:28.161910 systemd-logind[1440]: New session 19 of user core. Jan 13 22:05:28.169069 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 22:05:29.120157 sshd[5490]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:29.131989 systemd[1]: sshd@16-172.24.4.131:22-172.24.4.1:53974.service: Deactivated successfully. Jan 13 22:05:29.135429 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 22:05:29.137511 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. Jan 13 22:05:29.151518 systemd[1]: Started sshd@17-172.24.4.131:22-172.24.4.1:53986.service - OpenSSH per-connection server daemon (172.24.4.1:53986). Jan 13 22:05:29.156720 systemd-logind[1440]: Removed session 19. Jan 13 22:05:30.436146 sshd[5503]: Accepted publickey for core from 172.24.4.1 port 53986 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:30.455446 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:30.503903 systemd-logind[1440]: New session 20 of user core. Jan 13 22:05:30.511130 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 22:05:33.586845 sshd[5503]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:33.607009 systemd[1]: sshd@17-172.24.4.131:22-172.24.4.1:53986.service: Deactivated successfully. Jan 13 22:05:33.612239 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 13 22:05:33.614635 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. Jan 13 22:05:33.624572 systemd[1]: Started sshd@18-172.24.4.131:22-172.24.4.1:43798.service - OpenSSH per-connection server daemon (172.24.4.1:43798). Jan 13 22:05:33.631050 systemd-logind[1440]: Removed session 20. Jan 13 22:05:35.081871 sshd[5520]: Accepted publickey for core from 172.24.4.1 port 43798 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:35.085286 sshd[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:35.097211 systemd-logind[1440]: New session 21 of user core. Jan 13 22:05:35.101995 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 22:05:36.244055 sshd[5520]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:36.255880 systemd[1]: sshd@18-172.24.4.131:22-172.24.4.1:43798.service: Deactivated successfully. Jan 13 22:05:36.259447 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 22:05:36.263675 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. Jan 13 22:05:36.269351 systemd[1]: Started sshd@19-172.24.4.131:22-172.24.4.1:43810.service - OpenSSH per-connection server daemon (172.24.4.1:43810). Jan 13 22:05:36.273033 systemd-logind[1440]: Removed session 21. Jan 13 22:05:37.649429 sshd[5536]: Accepted publickey for core from 172.24.4.1 port 43810 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:37.652726 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:37.671689 systemd-logind[1440]: New session 22 of user core. Jan 13 22:05:37.677242 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 22:05:38.330098 sshd[5536]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:38.337349 systemd[1]: sshd@19-172.24.4.131:22-172.24.4.1:43810.service: Deactivated successfully. Jan 13 22:05:38.343098 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 22:05:38.345976 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit. Jan 13 22:05:38.348245 systemd-logind[1440]: Removed session 22. Jan 13 22:05:43.344079 systemd[1]: Started sshd@20-172.24.4.131:22-172.24.4.1:43826.service - OpenSSH per-connection server daemon (172.24.4.1:43826). Jan 13 22:05:44.588330 sshd[5606]: Accepted publickey for core from 172.24.4.1 port 43826 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:44.595711 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:44.613922 systemd-logind[1440]: New session 23 of user core. Jan 13 22:05:44.621206 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 22:05:45.285506 sshd[5606]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:45.293001 systemd[1]: sshd@20-172.24.4.131:22-172.24.4.1:43826.service: Deactivated successfully. Jan 13 22:05:45.297632 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 22:05:45.299567 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit. Jan 13 22:05:45.302529 systemd-logind[1440]: Removed session 23. Jan 13 22:05:50.308300 systemd[1]: Started sshd@21-172.24.4.131:22-172.24.4.1:46256.service - OpenSSH per-connection server daemon (172.24.4.1:46256). 
Jan 13 22:05:51.564330 sshd[5620]: Accepted publickey for core from 172.24.4.1 port 46256 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:51.565867 sshd[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:51.571068 systemd-logind[1440]: New session 24 of user core. Jan 13 22:05:51.580042 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 22:05:52.291265 sshd[5620]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:52.297186 systemd[1]: sshd@21-172.24.4.131:22-172.24.4.1:46256.service: Deactivated successfully. Jan 13 22:05:52.300995 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 22:05:52.304855 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit. Jan 13 22:05:52.307250 systemd-logind[1440]: Removed session 24. Jan 13 22:05:57.312446 systemd[1]: Started sshd@22-172.24.4.131:22-172.24.4.1:52184.service - OpenSSH per-connection server daemon (172.24.4.1:52184). Jan 13 22:05:58.495988 sshd[5654]: Accepted publickey for core from 172.24.4.1 port 52184 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:05:58.497510 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:05:58.502954 systemd-logind[1440]: New session 25 of user core. Jan 13 22:05:58.511963 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 22:05:59.300243 sshd[5654]: pam_unix(sshd:session): session closed for user core Jan 13 22:05:59.306732 systemd[1]: sshd@22-172.24.4.131:22-172.24.4.1:52184.service: Deactivated successfully. Jan 13 22:05:59.310683 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 22:05:59.313905 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit. Jan 13 22:05:59.316297 systemd-logind[1440]: Removed session 25. Jan 13 22:06:04.325446 systemd[1]: Started sshd@23-172.24.4.131:22-172.24.4.1:47960.service - OpenSSH per-connection server daemon (172.24.4.1:47960). Jan 13 22:06:05.735480 sshd[5667]: Accepted publickey for core from 172.24.4.1 port 47960 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:06:05.738201 sshd[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:06:05.750047 systemd-logind[1440]: New session 26 of user core. Jan 13 22:06:05.756704 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 22:06:06.408278 sshd[5667]: pam_unix(sshd:session): session closed for user core Jan 13 22:06:06.413432 systemd[1]: sshd@23-172.24.4.131:22-172.24.4.1:47960.service: Deactivated successfully. Jan 13 22:06:06.417604 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 22:06:06.421007 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit. Jan 13 22:06:06.423652 systemd-logind[1440]: Removed session 26. Jan 13 22:06:11.432341 systemd[1]: Started sshd@24-172.24.4.131:22-172.24.4.1:47968.service - OpenSSH per-connection server daemon (172.24.4.1:47968). Jan 13 22:06:12.681287 sshd[5703]: Accepted publickey for core from 172.24.4.1 port 47968 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 22:06:12.684047 sshd[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:06:12.695038 systemd-logind[1440]: New session 27 of user core. Jan 13 22:06:12.700117 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 13 22:06:13.294343 sshd[5703]: pam_unix(sshd:session): session closed for user core Jan 13 22:06:13.301690 systemd[1]: sshd@24-172.24.4.131:22-172.24.4.1:47968.service: Deactivated successfully. Jan 13 22:06:13.305534 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 22:06:13.308463 systemd-logind[1440]: Session 27 logged out. Waiting for processes to exit. Jan 13 22:06:13.311702 systemd-logind[1440]: Removed session 27.
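
[Editor's note] The sshd acceptance lines throughout this section share one fixed shape: "Accepted publickey for USER from IP port PORT ssh2: RSA SHA256:FINGERPRINT". A minimal Go sketch that extracts those fields with a regular expression, useful for auditing which key opened each session; the regexp is an assumption fitted to the lines above, not a general grammar for sshd output, which varies by auth method and version.

```go
package main

import (
	"fmt"
	"regexp"
)

// Fitted to the "Accepted publickey" lines in this log.
var acceptedRe = regexp.MustCompile(
	`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: RSA (SHA256:\S+)`)

func main() {
	// Sample line copied from the log above.
	line := `Jan 13 22:05:58.495988 sshd[5654]: Accepted publickey for core ` +
		`from 172.24.4.1 port 52184 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q`

	m := acceptedRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("user=%s ip=%s port=%s key=%s\n", m[1], m[2], m[3], m[4])
}
```

Run against this section, every acceptance line matches the same user, source address, and key fingerprint, which is consistent with a single CI host reconnecting for sessions 10 through 27.
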