Jan 29 12:54:12.970188 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 12:54:12.970219 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:54:12.970229 kernel: BIOS-provided physical RAM map:
Jan 29 12:54:12.970236 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 12:54:12.970243 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 12:54:12.970253 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 12:54:12.970261 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 29 12:54:12.970268 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 29 12:54:12.970275 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 12:54:12.970282 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 12:54:12.970290 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 29 12:54:12.970297 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 12:54:12.970304 kernel: NX (Execute Disable) protection: active
Jan 29 12:54:12.970323 kernel: APIC: Static calls initialized
Jan 29 12:54:12.970334 kernel: SMBIOS 3.0.0 present.
Jan 29 12:54:12.970342 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 29 12:54:12.970350 kernel: Hypervisor detected: KVM
Jan 29 12:54:12.970357 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 12:54:12.970365 kernel: kvm-clock: using sched offset of 3435266428 cycles
Jan 29 12:54:12.970376 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 12:54:12.970383 kernel: tsc: Detected 1996.249 MHz processor
Jan 29 12:54:12.970391 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 12:54:12.970400 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 12:54:12.970407 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 29 12:54:12.970416 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 12:54:12.970425 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 12:54:12.970433 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 29 12:54:12.970441 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:54:12.970452 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 29 12:54:12.970461 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:54:12.970469 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:54:12.970477 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:54:12.970485 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 29 12:54:12.970493 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:54:12.970502 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:54:12.970510 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 29 12:54:12.970518 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 29 12:54:12.970528 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 29 12:54:12.970536 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 29 12:54:12.970545 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 29 12:54:12.970556 kernel: No NUMA configuration found
Jan 29 12:54:12.970565 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 29 12:54:12.970574 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 29 12:54:12.970584 kernel: Zone ranges:
Jan 29 12:54:12.970593 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 12:54:12.970601 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 12:54:12.970610 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 29 12:54:12.970618 kernel: Movable zone start for each node
Jan 29 12:54:12.970627 kernel: Early memory node ranges
Jan 29 12:54:12.970635 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 12:54:12.970644 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 29 12:54:12.970654 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 29 12:54:12.970663 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 29 12:54:12.970672 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 12:54:12.970680 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 12:54:12.970689 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 29 12:54:12.970698 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 12:54:12.970706 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 12:54:12.970715 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 12:54:12.970723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 12:54:12.970734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 12:54:12.970742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 12:54:12.970751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 12:54:12.970760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 12:54:12.970888 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 12:54:12.970898 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 12:54:12.970906 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 12:54:12.970915 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 29 12:54:12.970923 kernel: Booting paravirtualized kernel on KVM
Jan 29 12:54:12.970935 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 12:54:12.970944 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 12:54:12.970953 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 12:54:12.970961 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 12:54:12.970970 kernel: pcpu-alloc: [0] 0 1
Jan 29 12:54:12.970978 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 12:54:12.970988 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:54:12.970997 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:54:12.971009 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 12:54:12.971018 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 12:54:12.971027 kernel: Fallback order for Node 0: 0
Jan 29 12:54:12.971035 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 29 12:54:12.971044 kernel: Policy zone: Normal
Jan 29 12:54:12.971052 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:54:12.971061 kernel: software IO TLB: area num 2.
Jan 29 12:54:12.971070 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved)
Jan 29 12:54:12.971079 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 12:54:12.971089 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 12:54:12.971098 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 12:54:12.971106 kernel: Dynamic Preempt: voluntary
Jan 29 12:54:12.971115 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:54:12.971124 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:54:12.971133 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 12:54:12.971142 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:54:12.971151 kernel: Rude variant of Tasks RCU enabled.
Jan 29 12:54:12.971159 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:54:12.971170 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:54:12.971178 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 12:54:12.971187 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 12:54:12.971195 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:54:12.971204 kernel: Console: colour VGA+ 80x25
Jan 29 12:54:12.971213 kernel: printk: console [tty0] enabled
Jan 29 12:54:12.971221 kernel: printk: console [ttyS0] enabled
Jan 29 12:54:12.971230 kernel: ACPI: Core revision 20230628
Jan 29 12:54:12.971239 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 12:54:12.971250 kernel: x2apic enabled
Jan 29 12:54:12.971258 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 12:54:12.971267 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 12:54:12.971275 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 12:54:12.971284 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 29 12:54:12.971293 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 12:54:12.971302 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 12:54:12.971310 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 12:54:12.971319 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 12:54:12.971328 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 12:54:12.971338 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 12:54:12.971347 kernel: Speculative Store Bypass: Vulnerable
Jan 29 12:54:12.971355 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 29 12:54:12.971364 kernel: Freeing SMP alternatives memory: 32K
Jan 29 12:54:12.971379 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:54:12.971390 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:54:12.971399 kernel: landlock: Up and running.
Jan 29 12:54:12.971408 kernel: SELinux: Initializing.
Jan 29 12:54:12.971417 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:54:12.971426 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:54:12.971435 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 29 12:54:12.971447 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:54:12.971456 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:54:12.971465 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:54:12.971475 kernel: Performance Events: AMD PMU driver.
Jan 29 12:54:12.971483 kernel: ... version: 0
Jan 29 12:54:12.971494 kernel: ... bit width: 48
Jan 29 12:54:12.971504 kernel: ... generic registers: 4
Jan 29 12:54:12.971513 kernel: ... value mask: 0000ffffffffffff
Jan 29 12:54:12.971522 kernel: ... max period: 00007fffffffffff
Jan 29 12:54:12.971531 kernel: ... fixed-purpose events: 0
Jan 29 12:54:12.971540 kernel: ... event mask: 000000000000000f
Jan 29 12:54:12.971549 kernel: signal: max sigframe size: 1440
Jan 29 12:54:12.971559 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:54:12.971569 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:54:12.971579 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:54:12.971588 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 12:54:12.971596 kernel: .... node #0, CPUs: #1
Jan 29 12:54:12.971604 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 12:54:12.971613 kernel: smpboot: Max logical packages: 2
Jan 29 12:54:12.971621 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 29 12:54:12.971630 kernel: devtmpfs: initialized
Jan 29 12:54:12.971638 kernel: x86/mm: Memory block size: 128MB
Jan 29 12:54:12.971647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:54:12.971657 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 12:54:12.971666 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:54:12.971674 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:54:12.971683 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:54:12.971691 kernel: audit: type=2000 audit(1738155251.524:1): state=initialized audit_enabled=0 res=1
Jan 29 12:54:12.971700 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:54:12.971708 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 12:54:12.971716 kernel: cpuidle: using governor menu
Jan 29 12:54:12.971725 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:54:12.971735 kernel: dca service started, version 1.12.1
Jan 29 12:54:12.971743 kernel: PCI: Using configuration type 1 for base access
Jan 29 12:54:12.971752 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 12:54:12.971760 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:54:12.971783 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:54:12.971792 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:54:12.971800 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:54:12.971808 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:54:12.971817 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:54:12.971842 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 12:54:12.971850 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 12:54:12.971859 kernel: ACPI: Interpreter enabled
Jan 29 12:54:12.971867 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 12:54:12.971876 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 12:54:12.971884 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 12:54:12.971893 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 12:54:12.971901 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 12:54:12.971910 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 12:54:12.972050 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:54:12.972149 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 12:54:12.972239 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 12:54:12.972253 kernel: acpiphp: Slot [3] registered
Jan 29 12:54:12.972261 kernel: acpiphp: Slot [4] registered
Jan 29 12:54:12.972270 kernel: acpiphp: Slot [5] registered
Jan 29 12:54:12.972278 kernel: acpiphp: Slot [6] registered
Jan 29 12:54:12.972287 kernel: acpiphp: Slot [7] registered
Jan 29 12:54:12.972299 kernel: acpiphp: Slot [8] registered
Jan 29 12:54:12.972307 kernel: acpiphp: Slot [9] registered
Jan 29 12:54:12.972315 kernel: acpiphp: Slot [10] registered
Jan 29 12:54:12.972323 kernel: acpiphp: Slot [11] registered
Jan 29 12:54:12.972332 kernel: acpiphp: Slot [12] registered
Jan 29 12:54:12.972340 kernel: acpiphp: Slot [13] registered
Jan 29 12:54:12.972348 kernel: acpiphp: Slot [14] registered
Jan 29 12:54:12.972356 kernel: acpiphp: Slot [15] registered
Jan 29 12:54:12.972365 kernel: acpiphp: Slot [16] registered
Jan 29 12:54:12.972375 kernel: acpiphp: Slot [17] registered
Jan 29 12:54:12.972383 kernel: acpiphp: Slot [18] registered
Jan 29 12:54:12.972391 kernel: acpiphp: Slot [19] registered
Jan 29 12:54:12.972399 kernel: acpiphp: Slot [20] registered
Jan 29 12:54:12.972408 kernel: acpiphp: Slot [21] registered
Jan 29 12:54:12.972416 kernel: acpiphp: Slot [22] registered
Jan 29 12:54:12.972424 kernel: acpiphp: Slot [23] registered
Jan 29 12:54:12.972432 kernel: acpiphp: Slot [24] registered
Jan 29 12:54:12.972441 kernel: acpiphp: Slot [25] registered
Jan 29 12:54:12.972449 kernel: acpiphp: Slot [26] registered
Jan 29 12:54:12.972459 kernel: acpiphp: Slot [27] registered
Jan 29 12:54:12.972467 kernel: acpiphp: Slot [28] registered
Jan 29 12:54:12.972475 kernel: acpiphp: Slot [29] registered
Jan 29 12:54:12.972484 kernel: acpiphp: Slot [30] registered
Jan 29 12:54:12.972492 kernel: acpiphp: Slot [31] registered
Jan 29 12:54:12.972500 kernel: PCI host bridge to bus 0000:00
Jan 29 12:54:12.972590 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 12:54:12.972672 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 12:54:12.972758 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 12:54:12.974903 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 12:54:12.974992 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 29 12:54:12.975081 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 12:54:12.975193 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 12:54:12.975299 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 12:54:12.975411 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 29 12:54:12.975509 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 29 12:54:12.975605 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 29 12:54:12.975694 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 29 12:54:12.975805 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 29 12:54:12.975898 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 29 12:54:12.975996 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 29 12:54:12.976092 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 12:54:12.976183 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 12:54:12.976284 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 29 12:54:12.976376 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 29 12:54:12.976466 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 29 12:54:12.976556 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 29 12:54:12.976650 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 29 12:54:12.976740 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 12:54:12.976880 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 12:54:12.976974 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 29 12:54:12.977065 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 29 12:54:12.977156 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 29 12:54:12.977245 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 29 12:54:12.977343 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 12:54:12.977440 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 12:54:12.977529 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 29 12:54:12.977619 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 29 12:54:12.977718 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 29 12:54:12.977846 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 29 12:54:12.977938 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 29 12:54:12.978039 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 12:54:12.978127 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 29 12:54:12.978214 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 29 12:54:12.978304 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 29 12:54:12.978329 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 12:54:12.978339 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 12:54:12.978347 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 12:54:12.978356 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 12:54:12.978368 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 12:54:12.978377 kernel: iommu: Default domain type: Translated
Jan 29 12:54:12.978385 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 12:54:12.978394 kernel: PCI: Using ACPI for IRQ routing
Jan 29 12:54:12.978403 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 12:54:12.978411 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 12:54:12.978421 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 29 12:54:12.978517 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 12:54:12.978612 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 12:54:12.978714 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 12:54:12.978728 kernel: vgaarb: loaded
Jan 29 12:54:12.978738 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 12:54:12.978747 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:54:12.978756 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:54:12.978784 kernel: pnp: PnP ACPI init
Jan 29 12:54:12.978884 kernel: pnp 00:03: [dma 2]
Jan 29 12:54:12.978899 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 12:54:12.978909 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 12:54:12.978923 kernel: NET: Registered PF_INET protocol family
Jan 29 12:54:12.978932 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 12:54:12.978942 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 12:54:12.978951 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:54:12.978960 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 12:54:12.978970 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 12:54:12.978979 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 12:54:12.978989 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:54:12.979000 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:54:12.979009 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:54:12.979018 kernel: NET: Registered PF_XDP protocol family
Jan 29 12:54:12.979103 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 12:54:12.979187 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 12:54:12.979270 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 12:54:12.979353 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 29 12:54:12.979436 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 29 12:54:12.979532 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 12:54:12.979635 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 12:54:12.979649 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:54:12.979658 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 12:54:12.979667 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 29 12:54:12.979675 kernel: Initialise system trusted keyrings
Jan 29 12:54:12.979684 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 12:54:12.979693 kernel: Key type asymmetric registered
Jan 29 12:54:12.979701 kernel: Asymmetric key parser 'x509' registered
Jan 29 12:54:12.979713 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 12:54:12.979722 kernel: io scheduler mq-deadline registered
Jan 29 12:54:12.979731 kernel: io scheduler kyber registered
Jan 29 12:54:12.979739 kernel: io scheduler bfq registered
Jan 29 12:54:12.979748 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 12:54:12.979757 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 12:54:12.980918 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 12:54:12.980929 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 12:54:12.980938 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 12:54:12.980951 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 12:54:12.980960 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 12:54:12.980969 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 12:54:12.980978 kernel: random: crng init done
Jan 29 12:54:12.980987 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 12:54:12.980995 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 12:54:12.981098 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 12:54:12.981183 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 12:54:12.981201 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 12:54:12.981280 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T12:54:12 UTC (1738155252)
Jan 29 12:54:12.981361 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 29 12:54:12.981373 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 12:54:12.981382 kernel: NET: Registered PF_INET6 protocol family
Jan 29 12:54:12.981391 kernel: Segment Routing with IPv6
Jan 29 12:54:12.981400 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 12:54:12.981409 kernel: NET: Registered PF_PACKET protocol family
Jan 29 12:54:12.981417 kernel: Key type dns_resolver registered
Jan 29 12:54:12.981429 kernel: IPI shorthand broadcast: enabled
Jan 29 12:54:12.981438 kernel: sched_clock: Marking stable (992007562, 172508773)->(1211948238, -47431903)
Jan 29 12:54:12.981447 kernel: registered taskstats version 1
Jan 29 12:54:12.981455 kernel: Loading compiled-in X.509 certificates
Jan 29 12:54:12.981464 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 12:54:12.981473 kernel: Key type .fscrypt registered
Jan 29 12:54:12.981481 kernel: Key type fscrypt-provisioning registered
Jan 29 12:54:12.981490 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:54:12.981500 kernel: ima: Allocated hash algorithm: sha1
Jan 29 12:54:12.981509 kernel: ima: No architecture policies found
Jan 29 12:54:12.981518 kernel: clk: Disabling unused clocks
Jan 29 12:54:12.981526 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 12:54:12.981535 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 12:54:12.981543 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 12:54:12.981552 kernel: Run /init as init process
Jan 29 12:54:12.981560 kernel: with arguments:
Jan 29 12:54:12.981569 kernel: /init
Jan 29 12:54:12.981577 kernel: with environment:
Jan 29 12:54:12.981588 kernel: HOME=/
Jan 29 12:54:12.981596 kernel: TERM=linux
Jan 29 12:54:12.981604 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 12:54:12.981615 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:54:12.981627 systemd[1]: Detected virtualization kvm.
Jan 29 12:54:12.981636 systemd[1]: Detected architecture x86-64.
Jan 29 12:54:12.981646 systemd[1]: Running in initrd.
Jan 29 12:54:12.981657 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:54:12.981666 systemd[1]: Hostname set to .
Jan 29 12:54:12.981676 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:54:12.981685 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:54:12.981694 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:54:12.981703 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:54:12.981714 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:54:12.981732 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:54:12.981744 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:54:12.981754 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:54:12.982796 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:54:12.982813 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:54:12.982828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:54:12.982838 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:54:12.982850 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:54:12.982860 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:54:12.982870 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:54:12.982881 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:54:12.982891 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:54:12.982901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:54:12.982912 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:54:12.982925 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:54:12.982936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:54:12.982946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:54:12.982956 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:54:12.982967 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:54:12.982977 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:54:12.982988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:54:12.982998 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:54:12.983008 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:54:12.983021 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:54:12.983031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:54:12.983065 systemd-journald[184]: Collecting audit messages is disabled.
Jan 29 12:54:12.983090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:54:12.983104 systemd-journald[184]: Journal started
Jan 29 12:54:12.983128 systemd-journald[184]: Runtime Journal (/run/log/journal/e3e3ec07fce644548f69be6ac15ffcae) is 8.0M, max 78.3M, 70.3M free.
Jan 29 12:54:12.991901 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:54:12.992302 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:54:12.992947 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:54:12.993599 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:54:13.006839 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:54:13.009158 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:54:13.015225 systemd-modules-load[185]: Inserted module 'overlay'
Jan 29 12:54:13.032070 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:54:13.079103 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:54:13.079129 kernel: Bridge firewalling registered
Jan 29 12:54:13.045208 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 29 12:54:13.080635 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:54:13.081377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:54:13.082869 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:54:13.091997 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:54:13.095975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:54:13.099084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:54:13.115214 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:54:13.122023 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:54:13.123476 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:54:13.124847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:54:13.136922 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:54:13.154805 dracut-cmdline[216]: dracut-dracut-053
Jan 29 12:54:13.158780 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:54:13.179972 systemd-resolved[220]: Positive Trust Anchors:
Jan 29 12:54:13.180713 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:54:13.180756 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:54:13.186797 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 29 12:54:13.187812 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:54:13.188635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:54:13.242795 kernel: SCSI subsystem initialized
Jan 29 12:54:13.253857 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:54:13.266230 kernel: iscsi: registered transport (tcp)
Jan 29 12:54:13.289015 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:54:13.289076 kernel: QLogic iSCSI HBA Driver
Jan 29 12:54:13.351320 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:54:13.359111 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:54:13.410959 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:54:13.411034 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:54:13.411934 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:54:13.454794 kernel: raid6: sse2x4 gen() 12990 MB/s
Jan 29 12:54:13.473816 kernel: raid6: sse2x2 gen() 14636 MB/s
Jan 29 12:54:13.493235 kernel: raid6: sse2x1 gen() 3648 MB/s
Jan 29 12:54:13.493710 kernel: raid6: using algorithm sse2x2 gen() 14636 MB/s
Jan 29 12:54:13.514147 kernel: raid6: .... xor() 9264 MB/s, rmw enabled
Jan 29 12:54:13.514197 kernel: raid6: using ssse3x2 recovery algorithm
Jan 29 12:54:13.537381 kernel: xor: measuring software checksum speed
Jan 29 12:54:13.537454 kernel: prefetch64-sse : 18520 MB/sec
Jan 29 12:54:13.537891 kernel: generic_sse : 16867 MB/sec
Jan 29 12:54:13.539035 kernel: xor: using function: prefetch64-sse (18520 MB/sec)
Jan 29 12:54:13.717923 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:54:13.735936 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:54:13.746082 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:54:13.767679 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 29 12:54:13.773227 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:54:13.784036 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:54:13.821530 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Jan 29 12:54:13.881835 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:54:13.890027 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:54:13.976299 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:54:13.986188 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:54:14.031484 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:54:14.035166 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:54:14.036451 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:54:14.038869 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:54:14.047940 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:54:14.069548 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:54:14.074161 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 29 12:54:14.120350 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 29 12:54:14.120477 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:54:14.120492 kernel: libata version 3.00 loaded.
Jan 29 12:54:14.120504 kernel: GPT:17805311 != 20971519
Jan 29 12:54:14.120515 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:54:14.120525 kernel: GPT:17805311 != 20971519
Jan 29 12:54:14.120536 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:54:14.120546 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:54:14.120560 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 29 12:54:14.120690 kernel: scsi host0: ata_piix
Jan 29 12:54:14.120854 kernel: scsi host1: ata_piix
Jan 29 12:54:14.120973 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 29 12:54:14.120985 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 29 12:54:14.100331 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:54:14.100602 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:54:14.108982 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:54:14.109989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:54:14.110121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:54:14.111469 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:54:14.126218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:54:14.179112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:54:14.184994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:54:14.210073 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:54:14.321835 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (464)
Jan 29 12:54:14.329854 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458)
Jan 29 12:54:14.370535 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 12:54:14.379029 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 12:54:14.383550 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 12:54:14.384151 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 12:54:14.390379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:54:14.401961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:54:14.413938 disk-uuid[512]: Primary Header is updated.
Jan 29 12:54:14.413938 disk-uuid[512]: Secondary Entries is updated.
Jan 29 12:54:14.413938 disk-uuid[512]: Secondary Header is updated.
Jan 29 12:54:14.421813 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:54:14.428948 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:54:15.442981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:54:15.444727 disk-uuid[513]: The operation has completed successfully.
Jan 29 12:54:15.515226 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:54:15.516755 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:54:15.542910 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:54:15.562912 sh[526]: Success
Jan 29 12:54:15.597831 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 29 12:54:15.668276 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:54:15.670550 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:54:15.685521 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:54:15.705813 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 12:54:15.705894 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:54:15.705939 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:54:15.705965 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:54:15.707463 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:54:15.725080 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:54:15.727248 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:54:15.736089 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:54:15.750221 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:54:15.786592 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:54:15.786681 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:54:15.786709 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:54:15.795821 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:54:15.822450 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:54:15.821875 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:54:15.844122 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:54:15.852055 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:54:15.861182 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:54:15.868920 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:54:15.899081 systemd-networkd[708]: lo: Link UP
Jan 29 12:54:15.899847 systemd-networkd[708]: lo: Gained carrier
Jan 29 12:54:15.900969 systemd-networkd[708]: Enumeration completed
Jan 29 12:54:15.901302 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:54:15.901931 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:54:15.901935 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:54:15.903357 systemd-networkd[708]: eth0: Link UP
Jan 29 12:54:15.903361 systemd-networkd[708]: eth0: Gained carrier
Jan 29 12:54:15.903368 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:54:15.903976 systemd[1]: Reached target network.target - Network.
Jan 29 12:54:15.913823 systemd-networkd[708]: eth0: DHCPv4 address 172.24.4.72/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 29 12:54:15.995123 ignition[691]: Ignition 2.19.0
Jan 29 12:54:15.995974 ignition[691]: Stage: fetch-offline
Jan 29 12:54:15.996494 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:54:15.996504 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:54:15.996609 ignition[691]: parsed url from cmdline: ""
Jan 29 12:54:15.996613 ignition[691]: no config URL provided
Jan 29 12:54:15.996619 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:54:16.000205 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:54:15.996627 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:54:16.000872 systemd-resolved[220]: Detected conflict on linux IN A 172.24.4.72
Jan 29 12:54:15.996633 ignition[691]: failed to fetch config: resource requires networking
Jan 29 12:54:16.000882 systemd-resolved[220]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Jan 29 12:54:15.997270 ignition[691]: Ignition finished successfully
Jan 29 12:54:16.012843 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 12:54:16.024193 ignition[720]: Ignition 2.19.0
Jan 29 12:54:16.024211 ignition[720]: Stage: fetch
Jan 29 12:54:16.024441 ignition[720]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:54:16.024453 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:54:16.024564 ignition[720]: parsed url from cmdline: ""
Jan 29 12:54:16.024569 ignition[720]: no config URL provided
Jan 29 12:54:16.024575 ignition[720]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:54:16.024584 ignition[720]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:54:16.024828 ignition[720]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 12:54:16.026164 ignition[720]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 12:54:16.026175 ignition[720]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 12:54:16.289738 ignition[720]: GET result: OK
Jan 29 12:54:16.289962 ignition[720]: parsing config with SHA512: 7adfadb2547e6129ec49be3766b239211385c746e565a28058b48fa8bee06c3455c4644d33739688596e0e5761bb183dbb48c8bc3246c909b09873aa4ddddcaf
Jan 29 12:54:16.300038 unknown[720]: fetched base config from "system"
Jan 29 12:54:16.300110 unknown[720]: fetched base config from "system"
Jan 29 12:54:16.300993 ignition[720]: fetch: fetch complete
Jan 29 12:54:16.300124 unknown[720]: fetched user config from "openstack"
Jan 29 12:54:16.301005 ignition[720]: fetch: fetch passed
Jan 29 12:54:16.304149 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 12:54:16.301096 ignition[720]: Ignition finished successfully
Jan 29 12:54:16.316207 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:54:16.353165 ignition[726]: Ignition 2.19.0
Jan 29 12:54:16.353193 ignition[726]: Stage: kargs
Jan 29 12:54:16.353614 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:54:16.353641 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:54:16.356241 ignition[726]: kargs: kargs passed
Jan 29 12:54:16.360762 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:54:16.356347 ignition[726]: Ignition finished successfully
Jan 29 12:54:16.369097 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:54:16.410295 ignition[732]: Ignition 2.19.0
Jan 29 12:54:16.410321 ignition[732]: Stage: disks
Jan 29 12:54:16.410542 ignition[732]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:54:16.412939 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:54:16.410559 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:54:16.414532 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:54:16.411667 ignition[732]: disks: disks passed
Jan 29 12:54:16.415991 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:54:16.411718 ignition[732]: Ignition finished successfully
Jan 29 12:54:16.417867 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:54:16.423238 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:54:16.425475 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:54:16.435980 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:54:16.464220 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 12:54:16.479838 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:54:16.489134 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:54:16.638835 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 12:54:16.639217 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:54:16.640326 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:54:16.647996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:54:16.652014 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:54:16.653709 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 12:54:16.657949 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 12:54:16.660365 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:54:16.682711 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (748)
Jan 29 12:54:16.682756 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:54:16.682836 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:54:16.682867 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:54:16.660401 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:54:16.664896 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:54:16.703035 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:54:16.702404 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:54:16.717842 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:54:16.805130 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:54:16.812255 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:54:16.821839 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:54:16.830168 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:54:16.990705 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:54:17.001936 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:54:17.012233 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:54:17.029151 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:54:17.037248 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:54:17.073112 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:54:17.080277 ignition[865]: INFO : Ignition 2.19.0
Jan 29 12:54:17.081132 ignition[865]: INFO : Stage: mount
Jan 29 12:54:17.081605 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:54:17.081605 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:54:17.083044 ignition[865]: INFO : mount: mount passed
Jan 29 12:54:17.083044 ignition[865]: INFO : Ignition finished successfully
Jan 29 12:54:17.084382 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:54:17.331512 systemd-networkd[708]: eth0: Gained IPv6LL
Jan 29 12:54:23.895252 coreos-metadata[750]: Jan 29 12:54:23.895 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 12:54:23.931638 coreos-metadata[750]: Jan 29 12:54:23.931 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 12:54:23.948552 coreos-metadata[750]: Jan 29 12:54:23.948 INFO Fetch successful
Jan 29 12:54:23.950057 coreos-metadata[750]: Jan 29 12:54:23.949 INFO wrote hostname ci-4081-3-0-e-0a72854eea.novalocal to /sysroot/etc/hostname
Jan 29 12:54:23.952565 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 12:54:23.952914 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 12:54:23.966989 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:54:24.001108 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:54:24.019833 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (882)
Jan 29 12:54:24.027338 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:54:24.027410 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:54:24.031594 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:54:24.042831 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:54:24.048021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:54:24.086875 ignition[900]: INFO : Ignition 2.19.0 Jan 29 12:54:24.086875 ignition[900]: INFO : Stage: files Jan 29 12:54:24.086875 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:54:24.086875 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:54:24.091237 ignition[900]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:54:24.093347 ignition[900]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:54:24.093347 ignition[900]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:54:24.099967 ignition[900]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:54:24.100749 ignition[900]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:54:24.101597 ignition[900]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:54:24.100949 unknown[900]: wrote ssh authorized keys file for user: core Jan 29 12:54:24.104010 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:54:24.104882 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:54:24.104882 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:54:24.104882 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:54:24.175963 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 12:54:24.460646 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:54:24.460646 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:54:24.465574 ignition[900]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:54:24.465574 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:54:24.850619 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 12:54:26.362349 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:54:26.362349 ignition[900]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 29 12:54:26.373839 ignition[900]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:54:26.373839 ignition[900]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:54:26.373839 ignition[900]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 29 12:54:26.373839 ignition[900]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 29 12:54:26.373839 ignition[900]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:54:26.373839 ignition[900]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:54:26.373839 ignition[900]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 29 12:54:26.373839 ignition[900]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:54:26.373839 ignition[900]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:54:26.373839 ignition[900]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:54:26.373839 ignition[900]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:54:26.373839 ignition[900]: INFO : files: files passed Jan 29 12:54:26.373839 ignition[900]: INFO : Ignition finished successfully Jan 29 12:54:26.368154 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:54:26.379923 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:54:26.384889 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:54:26.393728 systemd[1]: ignition-quench.service: Deactivated successfully. 
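Everything in the files stage above is declarative output: each op(N) corresponds to one entry in the Ignition config (files, links, systemd units, presets), and op(11) records the overall outcome at /etc/.ignition-result.json. Once the machine is up, the results can be spot-checked with standard tooling; a minimal sketch, assuming the paths written above:

    $ jq . /etc/.ignition-result.json             # result file written by op(11)
    $ systemctl cat containerd.service            # shows the 10-use-cgroupfs.conf drop-in from op(d)
    $ systemctl is-enabled prepare-helm.service   # "enabled", the preset applied by op(10)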
Jan 29 12:54:26.405181 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:54:26.405181 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:54:26.393925 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:54:26.413033 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:54:26.407590 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:54:26.413219 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:54:26.427931 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:54:26.471354 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:54:26.471577 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:54:26.473722 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:54:26.475382 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:54:26.477236 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:54:26.484058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:54:26.499578 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:54:26.508085 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:54:26.518890 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:54:26.519867 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:54:26.521981 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:54:26.524035 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:54:26.524196 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:54:26.526450 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:54:26.527595 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:54:26.529595 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:54:26.531274 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:54:26.532936 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:54:26.535253 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:54:26.538886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:54:26.541610 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:54:26.544548 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:54:26.547427 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:54:26.550056 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:54:26.550400 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:54:26.553373 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:54:26.555241 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:54:26.557735 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 29 12:54:26.560127 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:54:26.562315 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:54:26.562593 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:54:26.566019 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:54:26.566401 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:54:26.569966 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:54:26.570372 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:54:26.582003 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:54:26.583426 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:54:26.583928 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:54:26.594132 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:54:26.597938 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:54:26.598649 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:54:26.604578 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:54:26.604759 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:54:26.618869 ignition[953]: INFO : Ignition 2.19.0 Jan 29 12:54:26.618869 ignition[953]: INFO : Stage: umount Jan 29 12:54:26.618869 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:54:26.618869 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:54:26.618869 ignition[953]: INFO : umount: umount passed Jan 29 12:54:26.618869 ignition[953]: INFO : Ignition finished successfully Jan 29 12:54:26.619148 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:54:26.619283 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:54:26.621406 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:54:26.621483 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:54:26.625570 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:54:26.625642 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:54:26.626754 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:54:26.626843 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:54:26.627358 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:54:26.627405 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:54:26.627985 systemd[1]: Stopped target network.target - Network. Jan 29 12:54:26.628434 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:54:26.628478 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:54:26.629103 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:54:26.630088 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:54:26.631977 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:54:26.632572 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:54:26.633564 systemd[1]: Stopped target sockets.target - Socket Units. 
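The umount stage above is the last of Ignition's stages (fetch, kargs, disks, mount, files, umount), and the long run of "Stopped target ..." entries is ordinary initrd teardown before the switch to the real root. All stages log under the ignition syslog identifier, so the full provisioning trace can be recalled after boot; a sketch:

    $ journalctl -b -t ignition                 # all stage banners and op(N) lines, as above
    $ journalctl -b -u initrd-cleanup.service   # the teardown bracketing the umount stage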
Jan 29 12:54:26.635111 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:54:26.635147 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:54:26.638127 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:54:26.638162 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:54:26.639289 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:54:26.639329 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:54:26.640373 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:54:26.640412 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:54:26.641505 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:54:26.642548 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:54:26.646103 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:54:26.648741 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:54:26.648863 systemd-networkd[708]: eth0: DHCPv6 lease lost Jan 29 12:54:26.649629 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:54:26.651906 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:54:26.652025 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:54:26.653729 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:54:26.654263 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:54:26.664023 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:54:26.664797 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:54:26.664852 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:54:26.665899 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:54:26.665940 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:54:26.667124 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:54:26.667163 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:54:26.668275 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:54:26.668315 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:54:26.669575 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:54:26.679034 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:54:26.679158 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:54:26.681298 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:54:26.681374 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:54:26.683396 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:54:26.683449 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:54:26.684605 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:54:26.684637 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:54:26.685663 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:54:26.685707 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 29 12:54:26.687217 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:54:26.687257 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:54:26.688454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:54:26.688495 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:54:26.697928 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:54:26.698798 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:54:26.698852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:54:26.699408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:54:26.699448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:54:26.704695 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:54:26.704820 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:54:26.921614 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:54:26.922017 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:54:26.925757 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:54:26.927417 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:54:26.927537 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:54:26.937116 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:54:26.965000 systemd[1]: Switching root. Jan 29 12:54:27.016813 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 12:54:27.016912 systemd-journald[184]: Journal stopped Jan 29 12:54:28.631943 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:54:28.632010 kernel: SELinux: policy capability open_perms=1 Jan 29 12:54:28.632032 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:54:28.632047 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:54:28.632063 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:54:28.632077 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:54:28.632098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:54:28.632116 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:54:28.632133 kernel: audit: type=1403 audit(1738155267.566:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:54:28.632157 systemd[1]: Successfully loaded SELinux policy in 71.646ms. Jan 29 12:54:28.632190 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.734ms. Jan 29 12:54:28.632210 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:54:28.632223 systemd[1]: Detected virtualization kvm. Jan 29 12:54:28.632235 systemd[1]: Detected architecture x86-64. Jan 29 12:54:28.632247 systemd[1]: Detected first boot. Jan 29 12:54:28.632259 systemd[1]: Hostname set to <ci-4081-3-0-e-0a72854eea.novalocal>. Jan 29 12:54:28.632272 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:54:28.632286 zram_generator::config[1013]: No configuration found. 
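"Detected first boot" plus "Initializing machine ID from VM UUID" means /etc/machine-id was empty, so systemd seeded it from the hypervisor-provided UUID and then populated /etc from presets (the very next entry). A quick confirmation on the running system, sketched with standard commands:

    $ cat /etc/machine-id       # stable ID derived from the Nova VM UUID
    $ journalctl --list-boots   # a single boot listed on a freshly provisioned host
    $ hostnamectl hostname      # the transient hostname set above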
Jan 29 12:54:28.632299 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:54:28.632311 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:54:28.632322 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:54:28.632335 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:54:28.632347 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:54:28.632359 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:54:28.632371 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:54:28.632385 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:54:28.632397 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:54:28.632409 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:54:28.632420 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:54:28.632432 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:54:28.632444 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:54:28.632456 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:54:28.632468 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:54:28.632480 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:54:28.632494 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:54:28.632506 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:54:28.632517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:54:28.632530 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:54:28.632547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:54:28.632559 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:54:28.632571 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:54:28.632585 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:54:28.632597 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:54:28.632609 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:54:28.632621 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:54:28.632633 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:54:28.632645 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:54:28.632657 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:54:28.632668 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:54:28.632680 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:54:28.632694 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:54:28.632705 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:54:28.632718 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 29 12:54:28.632730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:54:28.632743 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:54:28.632755 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:54:28.632808 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:54:28.632823 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:54:28.632836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:54:28.632852 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:54:28.632865 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:54:28.632876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:54:28.632888 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:54:28.632900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:54:28.632911 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:54:28.632923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:54:28.632935 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:54:28.632949 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 12:54:28.632962 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 29 12:54:28.632973 kernel: fuse: init (API version 7.39) Jan 29 12:54:28.632985 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:54:28.632997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:54:28.633008 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:54:28.633019 kernel: loop: module loaded Jan 29 12:54:28.633047 systemd-journald[1124]: Collecting audit messages is disabled. Jan 29 12:54:28.633075 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:54:28.633088 systemd-journald[1124]: Journal started Jan 29 12:54:28.633112 systemd-journald[1124]: Runtime Journal (/run/log/journal/e3e3ec07fce644548f69be6ac15ffcae) is 8.0M, max 78.3M, 70.3M free. Jan 29 12:54:28.647422 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:54:28.647500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:54:28.651779 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:54:28.654280 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:54:28.654889 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:54:28.655459 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:54:28.656039 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:54:28.656601 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 29 12:54:28.657189 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:54:28.657929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:54:28.658664 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:54:28.658828 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:54:28.660021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:54:28.660153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:54:28.660884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:54:28.661017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:54:28.661874 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:54:28.662006 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:54:28.663027 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:54:28.663157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:54:28.664175 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:54:28.673237 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:54:28.678994 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:54:28.683483 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:54:28.687888 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:54:28.699206 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:54:28.707348 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:54:28.708285 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:54:28.709064 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:54:28.709691 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:54:28.710312 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:54:28.716217 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:54:28.720898 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:54:28.724719 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:54:28.731926 kernel: ACPI: bus type drm_connector registered Jan 29 12:54:28.730860 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:54:28.740280 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:54:28.744906 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:54:28.748977 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:54:28.757982 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:54:28.761911 systemd-journald[1124]: Time spent on flushing to /var/log/journal/e3e3ec07fce644548f69be6ac15ffcae is 24.010ms for 934 entries. 
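The Runtime Journal noted above lives on a tmpfs under /run/log/journal and is capped relative to RAM; systemd-journal-flush.service then migrates it into the persistent System Journal under /var/log/journal, with the larger cap shown in the next entry. Both figures can be read back at any time; a sketch:

    $ journalctl --disk-usage                                          # archived + active usage
    $ grep -E 'SystemMaxUse|RuntimeMaxUse' /etc/systemd/journald.conf  # explicit caps, if any are set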
Jan 29 12:54:28.761911 systemd-journald[1124]: System Journal (/var/log/journal/e3e3ec07fce644548f69be6ac15ffcae) is 8.0M, max 584.8M, 576.8M free. Jan 29 12:54:28.815806 systemd-journald[1124]: Received client request to flush runtime journal. Jan 29 12:54:28.775950 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jan 29 12:54:28.775966 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jan 29 12:54:28.779207 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:54:28.781134 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:54:28.785191 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:54:28.796851 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:54:28.819178 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:54:28.821283 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:54:28.830913 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:54:28.837311 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:54:28.855559 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 12:54:28.869368 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:54:28.876003 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:54:28.891575 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 29 12:54:28.891598 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 29 12:54:28.898239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:54:29.471380 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:54:29.489092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:54:29.514393 systemd-udevd[1198]: Using default interface naming scheme 'v255'. Jan 29 12:54:29.547643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:54:29.560557 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:54:29.612175 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:54:29.677742 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:54:29.722533 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 29 12:54:29.738812 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1207) Jan 29 12:54:29.781130 systemd-networkd[1206]: lo: Link UP Jan 29 12:54:29.781140 systemd-networkd[1206]: lo: Gained carrier Jan 29 12:54:29.782419 systemd-networkd[1206]: Enumeration completed Jan 29 12:54:29.782854 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:54:29.782858 systemd-networkd[1206]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 12:54:29.783563 systemd-networkd[1206]: eth0: Link UP Jan 29 12:54:29.783567 systemd-networkd[1206]: eth0: Gained carrier Jan 29 12:54:29.783581 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:54:29.783678 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:54:29.797828 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 12:54:29.841127 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 29 12:54:29.841150 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 12:54:29.841164 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 12:54:29.841351 kernel: Console: switching to colour dummy device 80x25 Jan 29 12:54:29.841369 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 12:54:29.841382 kernel: [drm] features: -context_init Jan 29 12:54:29.841395 kernel: [drm] number of scanouts: 1 Jan 29 12:54:29.841408 kernel: [drm] number of cap sets: 0 Jan 29 12:54:29.841420 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 12:54:29.841433 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 12:54:29.841446 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 12:54:29.796202 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:54:29.797863 systemd-networkd[1206]: eth0: DHCPv4 address 172.24.4.72/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:54:29.847871 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 12:54:29.866935 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 12:54:29.875461 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:54:29.880800 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:54:29.901803 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:54:29.909113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:54:29.919051 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:54:29.919312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:54:29.928971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:54:29.933186 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:54:29.933404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:54:29.940006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:54:29.940396 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:54:29.947950 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:54:29.972829 lvm[1244]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:54:29.999582 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:54:29.999830 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:54:30.005113 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:54:30.018052 lvm[1249]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
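eth0 is matched by the stock /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" note) and picks up 172.24.4.72/24 with gateway 172.24.4.1 over DHCPv4. The same state is visible through networkd's own tooling once the system is up; a sketch:

    $ networkctl status eth0     # carrier, DHCPv4 lease, and the matching .network file
    $ ip -4 addr show dev eth0   # 172.24.4.72/24, as acquired above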
Jan 29 12:54:30.066581 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:54:30.068395 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:54:30.069637 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:54:30.069699 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:54:30.070691 systemd[1]: Reached target machines.target - Containers. Jan 29 12:54:30.075171 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:54:30.083081 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:54:30.090059 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:54:30.093244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:54:30.097397 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:54:30.115498 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:54:30.127157 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:54:30.134199 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:54:30.147150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:54:30.183147 kernel: loop0: detected capacity change from 0 to 140768 Jan 29 12:54:30.180140 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:54:30.223080 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:54:30.225603 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:54:30.322548 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:54:30.352460 kernel: loop1: detected capacity change from 0 to 142488 Jan 29 12:54:30.470845 kernel: loop2: detected capacity change from 0 to 210664 Jan 29 12:54:30.542859 kernel: loop3: detected capacity change from 0 to 8 Jan 29 12:54:30.574843 kernel: loop4: detected capacity change from 0 to 140768 Jan 29 12:54:30.626885 kernel: loop5: detected capacity change from 0 to 142488 Jan 29 12:54:30.714436 kernel: loop6: detected capacity change from 0 to 210664 Jan 29 12:54:30.798235 kernel: loop7: detected capacity change from 0 to 8 Jan 29 12:54:30.800720 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 29 12:54:30.801761 (sd-merge)[1275]: Merged extensions into '/usr'. Jan 29 12:54:30.822565 systemd[1]: Reloading requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:54:30.823121 systemd[1]: Reloading... Jan 29 12:54:30.939064 zram_generator::config[1307]: No configuration found. Jan 29 12:54:31.102645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:54:31.168273 systemd[1]: Reloading finished in 344 ms. 
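The (sd-merge) entries are systemd-sysext merging every *.raw image linked under /etc/extensions (the kubernetes link written by Ignition op(a) earlier) or shipped under /opt/extensions into an overlay on /usr, which is why a full unit reload follows. Managing an extension by hand follows the same pattern; a sketch using the layout from this boot:

    $ systemd-sysext status   # containerd-flatcar, docker-flatcar, kubernetes, oem-openstack
    $ sudo ln -s /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw \
          /etc/extensions/kubernetes.raw   # the manual equivalent of Ignition op(a)
    $ sudo systemd-sysext refresh          # re-merge after adding or removing images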
Jan 29 12:54:31.183024 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:54:31.199991 systemd[1]: Starting ensure-sysext.service... Jan 29 12:54:31.205985 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:54:31.214103 systemd[1]: Reloading requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:54:31.214124 systemd[1]: Reloading... Jan 29 12:54:31.253159 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:54:31.253500 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:54:31.254381 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:54:31.254691 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 29 12:54:31.254781 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 29 12:54:31.258702 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:54:31.258714 systemd-tmpfiles[1365]: Skipping /boot Jan 29 12:54:31.281107 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:54:31.281118 systemd-tmpfiles[1365]: Skipping /boot Jan 29 12:54:31.285695 systemd-networkd[1206]: eth0: Gained IPv6LL Jan 29 12:54:31.310803 zram_generator::config[1395]: No configuration found. Jan 29 12:54:31.332562 ldconfig[1254]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:54:31.473260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:54:31.539382 systemd[1]: Reloading finished in 324 ms. Jan 29 12:54:31.555282 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:54:31.556598 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:54:31.557414 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:54:31.579025 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:54:31.601022 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:54:31.609066 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:54:31.623980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:54:31.633400 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:54:31.653488 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:54:31.654605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:54:31.658861 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:54:31.669860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:54:31.677003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 29 12:54:31.683191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:54:31.683369 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:54:31.685591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:54:31.685784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:54:31.694507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:54:31.695063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:54:31.698736 augenrules[1487]: No rules Jan 29 12:54:31.701960 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:54:31.704565 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:54:31.706851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:54:31.714726 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:54:31.729734 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:54:31.738627 systemd[1]: Finished ensure-sysext.service. Jan 29 12:54:31.744918 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:54:31.745224 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:54:31.750916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:54:31.755953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:54:31.764467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:54:31.779968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:54:31.783389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:54:31.793520 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:54:31.805990 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:54:31.807762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:54:31.815001 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:54:31.817139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:54:31.819950 systemd-resolved[1472]: Positive Trust Anchors: Jan 29 12:54:31.819961 systemd-resolved[1472]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:54:31.820003 systemd-resolved[1472]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:54:31.820445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:54:31.824088 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:54:31.824556 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:54:31.828035 systemd-resolved[1472]: Using system hostname 'ci-4081-3-0-e-0a72854eea.novalocal'. Jan 29 12:54:31.829457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:54:31.829628 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:54:31.830413 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:54:31.830560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:54:31.835126 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:54:31.836839 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:54:31.844150 systemd[1]: Reached target network.target - Network. Jan 29 12:54:31.846381 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:54:31.847616 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:54:31.849556 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:54:31.849631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:54:31.849667 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:54:31.905576 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:54:31.908869 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:54:31.911061 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:54:31.912454 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:54:31.913521 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:54:31.914447 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:54:31.914491 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:54:31.915066 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:54:31.916207 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:54:31.917729 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
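systemd-resolved seeds DNSSEC with the root zone's KSK-2017 trust anchor (the ". IN DS 20326 8 2 ..." record above) and ships negative trust anchors so private zones such as 10.in-addr.arpa or .internal are never validated upstream. Resolver state and validation verdicts are inspectable at runtime; a sketch:

    $ resolvectl status       # per-link DNS servers and the DNSSEC setting
    $ resolvectl statistics   # cache hits and DNSSEC secure/insecure/bogus counters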
Jan 29 12:54:31.918694 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:54:31.920577 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:54:31.923887 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:54:31.928568 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:54:31.932724 systemd-timesyncd[1513]: Contacted time server 164.132.166.29:123 (0.flatcar.pool.ntp.org). Jan 29 12:54:31.932872 systemd-timesyncd[1513]: Initial clock synchronization to Wed 2025-01-29 12:54:31.876072 UTC. Jan 29 12:54:31.933741 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:54:31.935708 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:54:31.936365 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:54:31.937821 systemd[1]: System is tainted: cgroupsv1 Jan 29 12:54:31.937858 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:54:31.937880 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:54:31.939340 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:54:31.953120 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:54:31.966915 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:54:31.971703 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:54:31.987017 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:54:31.988822 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:54:31.993887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:54:31.997666 jq[1534]: false Jan 29 12:54:32.007145 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:54:32.013927 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:54:32.027597 extend-filesystems[1535]: Found loop4 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found loop5 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found loop6 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found loop7 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found vda Jan 29 12:54:32.041750 extend-filesystems[1535]: Found vda1 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found vda2 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found vda3 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found usr Jan 29 12:54:32.041750 extend-filesystems[1535]: Found vda4 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found vda6 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found vda7 Jan 29 12:54:32.041750 extend-filesystems[1535]: Found vda9 Jan 29 12:54:32.041750 extend-filesystems[1535]: Checking size of /dev/vda9 Jan 29 12:54:32.190503 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 29 12:54:32.190530 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 29 12:54:32.190561 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1212) Jan 29 12:54:32.027919 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
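Two entries above are worth flagging: systemd-timesyncd stepped the clock once against 0.flatcar.pool.ntp.org (the 12:54:31.876072 stamp is the corrected time), and "System is tainted: cgroupsv1" follows from the /etc/flatcar-cgroupv1 marker written back in Ignition op(3). Both can be confirmed later; a sketch:

    $ timedatectl timesync-status   # server, poll interval, last synchronization
    $ busctl get-property org.freedesktop.systemd1 /org/freedesktop/systemd1 \
          org.freedesktop.systemd1.Manager Tainted   # s "cgroupsv1"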
Jan 29 12:54:32.041309 dbus-daemon[1531]: [system] SELinux support is enabled Jan 29 12:54:32.199614 extend-filesystems[1535]: Resized partition /dev/vda9 Jan 29 12:54:32.035531 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:54:32.212658 extend-filesystems[1557]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:54:32.212658 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:54:32.212658 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:54:32.212658 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 29 12:54:32.064435 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:54:32.247669 extend-filesystems[1535]: Resized filesystem in /dev/vda9 Jan 29 12:54:32.094012 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:54:32.110001 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:54:32.117023 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:54:32.268934 update_engine[1567]: I20250129 12:54:32.267132 1567 main.cc:92] Flatcar Update Engine starting Jan 29 12:54:32.122000 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:54:32.272040 jq[1568]: true Jan 29 12:54:32.135232 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:54:32.150225 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:54:32.150468 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:54:32.272494 tar[1578]: linux-amd64/helm Jan 29 12:54:32.152482 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:54:32.280924 jq[1579]: true Jan 29 12:54:32.168527 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:54:32.181600 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:54:32.184918 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:54:32.191795 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:54:32.202125 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:54:32.202356 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:54:32.270132 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:54:32.287786 update_engine[1567]: I20250129 12:54:32.281950 1567 update_check_scheduler.cc:74] Next update check in 2m17s Jan 29 12:54:32.291102 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:54:32.294603 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:54:32.294641 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:54:32.296826 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
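extend-filesystems grows the root filesystem in place: resize2fs 1.47.1 ran against the mounted /dev/vda9 and took it from 1617920 to 2014203 4 KiB blocks, roughly 7.7 GiB. The end state is easy to verify; a sketch:

    $ lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda9   # grown partition backing /
    $ df -B4096 /                                      # total in 4 KiB blocks, cf. 2014203 above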
Jan 29 12:54:32.296859 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:54:32.299546 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:54:32.307931 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:54:32.375920 systemd-logind[1561]: New seat seat0. Jan 29 12:54:32.382975 systemd-logind[1561]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 12:54:32.382996 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:54:32.383282 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:54:32.401982 bash[1609]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:54:32.408013 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:54:32.436416 systemd[1]: Starting sshkeys.service... Jan 29 12:54:32.462052 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:54:32.473182 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:54:32.595611 locksmithd[1599]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:54:32.770057 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:54:32.802803 containerd[1580]: time="2025-01-29T12:54:32.802126268Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:54:32.824153 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:54:32.834100 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:54:32.852534 containerd[1580]: time="2025-01-29T12:54:32.852438748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.853772200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.853804671Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.853821791Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.853997005Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.854017288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.854089451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.854107039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.854312742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.854330698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.854346127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:54:32.854890 containerd[1580]: time="2025-01-29T12:54:32.854358583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:54:32.855148 containerd[1580]: time="2025-01-29T12:54:32.854435122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:54:32.855148 containerd[1580]: time="2025-01-29T12:54:32.854634459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:54:32.855148 containerd[1580]: time="2025-01-29T12:54:32.854780953Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:54:32.855148 containerd[1580]: time="2025-01-29T12:54:32.854798182Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:54:32.855148 containerd[1580]: time="2025-01-29T12:54:32.854875080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:54:32.855148 containerd[1580]: time="2025-01-29T12:54:32.854923378Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:54:32.858387 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:54:32.860062 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:54:32.869624 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:54:32.870914 containerd[1580]: time="2025-01-29T12:54:32.868224530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:54:32.870914 containerd[1580]: time="2025-01-29T12:54:32.870636081Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:54:32.871845 containerd[1580]: time="2025-01-29T12:54:32.871195235Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:54:32.871845 containerd[1580]: time="2025-01-29T12:54:32.871229516Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:54:32.871845 containerd[1580]: time="2025-01-29T12:54:32.871424287Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:54:32.873240 containerd[1580]: time="2025-01-29T12:54:32.872977400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 29 12:54:32.873675 containerd[1580]: time="2025-01-29T12:54:32.873430857Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:54:32.873675 containerd[1580]: time="2025-01-29T12:54:32.873553475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:54:32.873675 containerd[1580]: time="2025-01-29T12:54:32.873574117Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:54:32.873675 containerd[1580]: time="2025-01-29T12:54:32.873593217Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:54:32.873675 containerd[1580]: time="2025-01-29T12:54:32.873610299Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:54:32.873675 containerd[1580]: time="2025-01-29T12:54:32.873626464Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:54:32.873675 containerd[1580]: time="2025-01-29T12:54:32.873640749Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:54:32.873675 containerd[1580]: time="2025-01-29T12:54:32.873657501Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:54:32.873675 containerd[1580]: time="2025-01-29T12:54:32.873675318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873693313Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873709668Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873725794Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873751410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873793818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873811655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873828228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873843657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873859246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873873939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873890522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873915173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873933000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874438 containerd[1580]: time="2025-01-29T12:54:32.873948867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.873962874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.873985237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874010375Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874037672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874052425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874065079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874122060Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874146085Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874159087Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874172745Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874185211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874200102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874212616Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:54:32.874748 containerd[1580]: time="2025-01-29T12:54:32.874224077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 12:54:32.875079 containerd[1580]: time="2025-01-29T12:54:32.874569122Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:54:32.875079 containerd[1580]: time="2025-01-29T12:54:32.874653052Z" level=info msg="Connect containerd service" Jan 29 12:54:32.875079 containerd[1580]: time="2025-01-29T12:54:32.874694108Z" level=info msg="using legacy CRI server" Jan 29 12:54:32.875079 containerd[1580]: time="2025-01-29T12:54:32.874704215Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.875334924Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.875901868Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 
12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.876252662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.876298363Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.876357891Z" level=info msg="Start subscribing containerd event" Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.876402339Z" level=info msg="Start recovering state" Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.876459450Z" level=info msg="Start event monitor" Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.876479257Z" level=info msg="Start snapshots syncer" Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.876495173Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.876504046Z" level=info msg="Start streaming server" Jan 29 12:54:32.877569 containerd[1580]: time="2025-01-29T12:54:32.876551817Z" level=info msg="containerd successfully booted in 0.076306s" Jan 29 12:54:32.879033 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:54:32.898131 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:54:32.908135 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:54:32.917972 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:54:32.921143 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:54:33.064667 tar[1578]: linux-amd64/LICENSE Jan 29 12:54:33.064667 tar[1578]: linux-amd64/README.md Jan 29 12:54:33.079895 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:54:34.131105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:54:34.152589 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:54:35.779720 kubelet[1664]: E0129 12:54:35.779603 1664 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:54:35.783437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:54:35.784392 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:54:37.994577 login[1647]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Jan 29 12:54:38.000656 login[1648]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:54:38.027003 systemd-logind[1561]: New session 1 of user core. Jan 29 12:54:38.031508 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:54:38.045461 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:54:38.075022 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:54:38.084580 systemd[1]: Starting user@500.service - User Manager for UID 500... 
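Once containerd logs "serving..." for /run/containerd/containerd.sock and its ttrpc twin, clients can connect. A quick reachability probe of that socket, as a sketch only (a real client would speak gRPC/CRI over the socket rather than just connecting):

```python
import socket

def containerd_socket_up(path: str = "/run/containerd/containerd.sock") -> bool:
    """Return True if containerd's unix socket accepts connections.

    This checks only reachability of the socket the log shows containerd
    serving on; it does not perform a gRPC handshake.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    print("containerd up:", containerd_socket_up())
```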
Jan 29 12:54:38.088920 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:54:38.202512 systemd[1683]: Queued start job for default target default.target. Jan 29 12:54:38.202895 systemd[1683]: Created slice app.slice - User Application Slice. Jan 29 12:54:38.202917 systemd[1683]: Reached target paths.target - Paths. Jan 29 12:54:38.202933 systemd[1683]: Reached target timers.target - Timers. Jan 29 12:54:38.207935 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:54:38.218660 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:54:38.219011 systemd[1683]: Reached target sockets.target - Sockets. Jan 29 12:54:38.219502 systemd[1683]: Reached target basic.target - Basic System. Jan 29 12:54:38.219851 systemd[1683]: Reached target default.target - Main User Target. Jan 29 12:54:38.219928 systemd[1683]: Startup finished in 123ms. Jan 29 12:54:38.220644 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:54:38.229048 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:54:38.996274 login[1647]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:54:39.011713 systemd-logind[1561]: New session 2 of user core. Jan 29 12:54:39.023500 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:54:39.059057 coreos-metadata[1529]: Jan 29 12:54:39.058 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:54:39.109317 coreos-metadata[1529]: Jan 29 12:54:39.109 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 12:54:39.296349 coreos-metadata[1529]: Jan 29 12:54:39.296 INFO Fetch successful Jan 29 12:54:39.296349 coreos-metadata[1529]: Jan 29 12:54:39.296 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:54:39.310217 coreos-metadata[1529]: Jan 29 12:54:39.310 INFO Fetch successful Jan 29 12:54:39.310377 coreos-metadata[1529]: Jan 29 12:54:39.310 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 12:54:39.324439 coreos-metadata[1529]: Jan 29 12:54:39.324 INFO Fetch successful Jan 29 12:54:39.324439 coreos-metadata[1529]: Jan 29 12:54:39.324 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 12:54:39.337693 coreos-metadata[1529]: Jan 29 12:54:39.337 INFO Fetch successful Jan 29 12:54:39.337889 coreos-metadata[1529]: Jan 29 12:54:39.337 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 12:54:39.350970 coreos-metadata[1529]: Jan 29 12:54:39.350 INFO Fetch successful Jan 29 12:54:39.350970 coreos-metadata[1529]: Jan 29 12:54:39.350 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 12:54:39.365119 coreos-metadata[1529]: Jan 29 12:54:39.364 INFO Fetch successful Jan 29 12:54:39.418245 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:54:39.421572 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
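The coreos-metadata entries above fall back from a config-drive to the 169.254.169.254 metadata service and retry each key ("Attempt #1"). A small sketch of the same fetch loop, stdlib only, with the key names taken verbatim from the journal:

```python
import time
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def fetch_metadata(key: str, attempts: int = 3, delay: float = 1.0) -> str:
    """Fetch one EC2-style metadata key, retrying like the
    "Attempt #N" lines in the coreos-metadata log above."""
    url = METADATA_BASE + key
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.read().decode()
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(delay)

if __name__ == "__main__":
    # Keys copied from the journal entries above.
    for key in ("hostname", "instance-id", "instance-type",
                "local-ipv4", "public-ipv4"):
        print(key, "=", fetch_metadata(key))
```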
Jan 29 12:54:39.574278 coreos-metadata[1615]: Jan 29 12:54:39.574 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:54:39.616587 coreos-metadata[1615]: Jan 29 12:54:39.616 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 12:54:39.631644 coreos-metadata[1615]: Jan 29 12:54:39.631 INFO Fetch successful Jan 29 12:54:39.631644 coreos-metadata[1615]: Jan 29 12:54:39.631 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:54:39.645654 coreos-metadata[1615]: Jan 29 12:54:39.645 INFO Fetch successful Jan 29 12:54:39.651010 unknown[1615]: wrote ssh authorized keys file for user: core Jan 29 12:54:39.685831 update-ssh-keys[1726]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:54:39.686935 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:54:39.693464 systemd[1]: Finished sshkeys.service. Jan 29 12:54:39.705551 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:54:39.706124 systemd[1]: Startup finished in 15.963s (kernel) + 12.210s (userspace) = 28.174s. Jan 29 12:54:40.882178 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:54:40.893303 systemd[1]: Started sshd@0-172.24.4.72:22-172.24.4.1:38188.service - OpenSSH per-connection server daemon (172.24.4.1:38188). Jan 29 12:54:42.015318 sshd[1734]: Accepted publickey for core from 172.24.4.1 port 38188 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:54:42.018044 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:54:42.027879 systemd-logind[1561]: New session 3 of user core. Jan 29 12:54:42.040382 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:54:42.657260 systemd[1]: Started sshd@1-172.24.4.72:22-172.24.4.1:38190.service - OpenSSH per-connection server daemon (172.24.4.1:38190). Jan 29 12:54:44.138945 sshd[1739]: Accepted publickey for core from 172.24.4.1 port 38190 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:54:44.141632 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:54:44.151100 systemd-logind[1561]: New session 4 of user core. Jan 29 12:54:44.161290 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:54:44.912103 sshd[1739]: pam_unix(sshd:session): session closed for user core Jan 29 12:54:44.921345 systemd[1]: Started sshd@2-172.24.4.72:22-172.24.4.1:33528.service - OpenSSH per-connection server daemon (172.24.4.1:33528). Jan 29 12:54:44.922430 systemd[1]: sshd@1-172.24.4.72:22-172.24.4.1:38190.service: Deactivated successfully. Jan 29 12:54:44.931302 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:54:44.934315 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:54:44.942357 systemd-logind[1561]: Removed session 4. Jan 29 12:54:45.887316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:54:45.906204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:54:46.179851 sshd[1744]: Accepted publickey for core from 172.24.4.1 port 33528 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:54:46.185336 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:54:46.187097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
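update-ssh-keys and the sshkeys agent above both end by rewriting /home/core/.ssh/authorized_keys. One safe way to do that is an atomic replace, sketched below; the helper is hypothetical and not necessarily how Flatcar's tooling is implemented:

```python
import os
import tempfile

def write_authorized_keys(keys, path="/home/core/.ssh/authorized_keys"):
    """Atomically replace authorized_keys, as update-ssh-keys reports doing.

    Writing to a temp file and renaming avoids a window where sshd could
    read a partially written key file.
    """
    d = os.path.dirname(path)
    os.makedirs(d, mode=0o700, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(keys) + "\n")
        os.chmod(tmp, 0o600)   # sshd is strict about key file permissions
        os.replace(tmp, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```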
Jan 29 12:54:46.190258 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:54:46.199060 systemd-logind[1561]: New session 5 of user core. Jan 29 12:54:46.207020 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:54:46.270269 kubelet[1760]: E0129 12:54:46.270188 1760 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:54:46.274189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:54:46.274391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:54:46.817994 sshd[1744]: pam_unix(sshd:session): session closed for user core Jan 29 12:54:46.828323 systemd[1]: Started sshd@3-172.24.4.72:22-172.24.4.1:33530.service - OpenSSH per-connection server daemon (172.24.4.1:33530). Jan 29 12:54:46.829596 systemd[1]: sshd@2-172.24.4.72:22-172.24.4.1:33528.service: Deactivated successfully. Jan 29 12:54:46.838210 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:54:46.847675 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:54:46.850485 systemd-logind[1561]: Removed session 5. Jan 29 12:54:48.222096 sshd[1773]: Accepted publickey for core from 172.24.4.1 port 33530 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:54:48.225116 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:54:48.236155 systemd-logind[1561]: New session 6 of user core. Jan 29 12:54:48.248310 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:54:48.881259 sshd[1773]: pam_unix(sshd:session): session closed for user core Jan 29 12:54:48.891579 systemd[1]: sshd@3-172.24.4.72:22-172.24.4.1:33530.service: Deactivated successfully. Jan 29 12:54:48.897654 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:54:48.901196 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:54:48.907485 systemd[1]: Started sshd@4-172.24.4.72:22-172.24.4.1:33546.service - OpenSSH per-connection server daemon (172.24.4.1:33546). Jan 29 12:54:48.910510 systemd-logind[1561]: Removed session 6. Jan 29 12:54:50.443300 sshd[1784]: Accepted publickey for core from 172.24.4.1 port 33546 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:54:50.446382 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:54:50.459432 systemd-logind[1561]: New session 7 of user core. Jan 29 12:54:50.466401 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:54:50.941260 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:54:50.941942 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:54:50.964894 sudo[1788]: pam_unix(sudo:session): session closed for user root Jan 29 12:54:51.189031 sshd[1784]: pam_unix(sshd:session): session closed for user core Jan 29 12:54:51.200302 systemd[1]: Started sshd@5-172.24.4.72:22-172.24.4.1:33560.service - OpenSSH per-connection server daemon (172.24.4.1:33560). 
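The kubelet keeps exiting with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file normally appears only once kubeadm init/join runs. Purely as an illustration of what the kubelet is looking for, a stub could be written like this (do not treat this as the real provisioning step; kubeadm generates the actual file):

```python
import os

# Deliberately minimal stand-in for the KubeletConfiguration file the
# kubelet reports missing above. The apiVersion/kind values are the real
# ones for that file; everything else kubeadm would normally fill in.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

path = "/var/lib/kubelet/config.yaml"
if not os.path.exists(path):
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(MINIMAL_KUBELET_CONFIG)
```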
Jan 29 12:54:51.202250 systemd[1]: sshd@4-172.24.4.72:22-172.24.4.1:33546.service: Deactivated successfully. Jan 29 12:54:51.214049 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:54:51.225332 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:54:51.228545 systemd-logind[1561]: Removed session 7. Jan 29 12:54:52.647561 sshd[1790]: Accepted publickey for core from 172.24.4.1 port 33560 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:54:52.649332 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:54:52.654115 systemd-logind[1561]: New session 8 of user core. Jan 29 12:54:52.660994 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:54:53.170243 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:54:53.171572 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:54:53.179808 sudo[1798]: pam_unix(sudo:session): session closed for user root Jan 29 12:54:53.191934 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:54:53.192626 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:54:53.221324 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:54:53.223474 auditctl[1801]: No rules Jan 29 12:54:53.224321 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:54:53.224818 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:54:53.233843 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:54:53.259160 augenrules[1820]: No rules Jan 29 12:54:53.261633 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:54:53.264092 sudo[1797]: pam_unix(sudo:session): session closed for user root Jan 29 12:54:53.527477 sshd[1790]: pam_unix(sshd:session): session closed for user core Jan 29 12:54:53.540501 systemd[1]: Started sshd@6-172.24.4.72:22-172.24.4.1:40902.service - OpenSSH per-connection server daemon (172.24.4.1:40902). Jan 29 12:54:53.548010 systemd[1]: sshd@5-172.24.4.72:22-172.24.4.1:33560.service: Deactivated successfully. Jan 29 12:54:53.556081 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:54:53.559365 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:54:53.563322 systemd-logind[1561]: Removed session 8. Jan 29 12:54:54.794221 sshd[1826]: Accepted publickey for core from 172.24.4.1 port 40902 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:54:54.797459 sshd[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:54:54.808285 systemd-logind[1561]: New session 9 of user core. Jan 29 12:54:54.818316 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:54:55.267467 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:54:55.268429 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:54:56.018190 systemd[1]: Starting docker.service - Docker Application Container Engine... 
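After the two sudo rm commands above delete the shipped rules files, restarting audit-rules drives auditctl and augenrules, and both report "No rules". The equivalent reload sequence, sketched with the same tools:

```python
import subprocess

def reload_audit_rules() -> None:
    """Mirror what audit-rules.service does in the journal above: clear the
    loaded rules, rebuild from /etc/audit/rules.d, then list the result.

    With both rules files removed, augenrules finds nothing to load and
    auditctl -l prints "No rules", exactly as logged.
    """
    subprocess.run(["auditctl", "-D"], check=True)        # delete all loaded rules
    subprocess.run(["augenrules", "--load"], check=True)  # load from rules.d
    subprocess.run(["auditctl", "-l"], check=True)        # list -> "No rules"

if __name__ == "__main__":
    reload_audit_rules()
```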
Jan 29 12:54:56.019745 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:54:56.512313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:54:56.526183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:54:56.671593 dockerd[1850]: time="2025-01-29T12:54:56.671534517Z" level=info msg="Starting up" Jan 29 12:54:56.912959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:54:56.916071 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:54:56.978547 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1188799314-merged.mount: Deactivated successfully. Jan 29 12:54:57.000346 kubelet[1875]: E0129 12:54:57.000308 1875 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:54:57.003420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:54:57.003946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:54:57.235174 dockerd[1850]: time="2025-01-29T12:54:57.234937883Z" level=info msg="Loading containers: start." Jan 29 12:54:57.418830 kernel: Initializing XFRM netlink socket Jan 29 12:54:57.521218 systemd-networkd[1206]: docker0: Link UP Jan 29 12:54:57.545169 dockerd[1850]: time="2025-01-29T12:54:57.544914234Z" level=info msg="Loading containers: done." Jan 29 12:54:57.572247 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1072074501-merged.mount: Deactivated successfully. Jan 29 12:54:57.579467 dockerd[1850]: time="2025-01-29T12:54:57.579375240Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:54:57.579957 dockerd[1850]: time="2025-01-29T12:54:57.579907104Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:54:57.580607 dockerd[1850]: time="2025-01-29T12:54:57.580402970Z" level=info msg="Daemon has completed initialization" Jan 29 12:54:57.630819 dockerd[1850]: time="2025-01-29T12:54:57.630649375Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:54:57.631444 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:54:59.470464 containerd[1580]: time="2025-01-29T12:54:59.469753253Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:55:00.197362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150189648.mount: Deactivated successfully. 
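Once dockerd logs "API listen on /run/docker.sock", the Engine API is available over that unix socket. A stdlib-only sketch that issues GET /version against it (unversioned paths are served by the daemon's current API version):

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over the unix socket the dockerd log says it listens on."""

    def __init__(self, path: str = "/run/docker.sock"):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

if __name__ == "__main__":
    conn = UnixHTTPConnection()
    conn.request("GET", "/version")  # Docker Engine API version endpoint
    print(conn.getresponse().read().decode())
```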
Jan 29 12:55:02.214897 containerd[1580]: time="2025-01-29T12:55:02.214791394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:02.216123 containerd[1580]: time="2025-01-29T12:55:02.216046917Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 29 12:55:02.217320 containerd[1580]: time="2025-01-29T12:55:02.217264925Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:02.220516 containerd[1580]: time="2025-01-29T12:55:02.220478118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:02.222067 containerd[1580]: time="2025-01-29T12:55:02.221550732Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.751684549s" Jan 29 12:55:02.222067 containerd[1580]: time="2025-01-29T12:55:02.221596152Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 12:55:02.245954 containerd[1580]: time="2025-01-29T12:55:02.245852918Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:55:04.630242 containerd[1580]: time="2025-01-29T12:55:04.630173989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:04.631796 containerd[1580]: time="2025-01-29T12:55:04.631546773Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 29 12:55:04.632988 containerd[1580]: time="2025-01-29T12:55:04.632944302Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:04.636395 containerd[1580]: time="2025-01-29T12:55:04.636351876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:04.637944 containerd[1580]: time="2025-01-29T12:55:04.637461134Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.391569046s" Jan 29 12:55:04.637944 containerd[1580]: time="2025-01-29T12:55:04.637508017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 12:55:04.661075 
containerd[1580]: time="2025-01-29T12:55:04.660873166Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:55:06.430717 containerd[1580]: time="2025-01-29T12:55:06.430650949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:06.432123 containerd[1580]: time="2025-01-29T12:55:06.431900035Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 29 12:55:06.433445 containerd[1580]: time="2025-01-29T12:55:06.433388832Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:06.436947 containerd[1580]: time="2025-01-29T12:55:06.436898270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:06.438305 containerd[1580]: time="2025-01-29T12:55:06.438192548Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.777283438s" Jan 29 12:55:06.438305 containerd[1580]: time="2025-01-29T12:55:06.438222814Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 12:55:06.461300 containerd[1580]: time="2025-01-29T12:55:06.461257926Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:55:07.013042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:55:07.023196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:55:07.209966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:55:07.210913 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:55:07.311874 kubelet[2101]: E0129 12:55:07.308182 2101 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:55:07.309800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:55:07.309998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:55:08.092520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901128641.mount: Deactivated successfully. 
Jan 29 12:55:08.799878 containerd[1580]: time="2025-01-29T12:55:08.799677380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:08.802412 containerd[1580]: time="2025-01-29T12:55:08.802298305Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 12:55:08.804446 containerd[1580]: time="2025-01-29T12:55:08.804330953Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:08.809245 containerd[1580]: time="2025-01-29T12:55:08.809133411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:08.812232 containerd[1580]: time="2025-01-29T12:55:08.810994372Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.349683398s" Jan 29 12:55:08.812232 containerd[1580]: time="2025-01-29T12:55:08.811083908Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:55:08.865962 containerd[1580]: time="2025-01-29T12:55:08.865820272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:55:09.519290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3784364985.mount: Deactivated successfully. 
Jan 29 12:55:10.859745 containerd[1580]: time="2025-01-29T12:55:10.859380637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:10.862608 containerd[1580]: time="2025-01-29T12:55:10.862499793Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 12:55:10.864290 containerd[1580]: time="2025-01-29T12:55:10.864145360Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:10.871993 containerd[1580]: time="2025-01-29T12:55:10.871854083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:10.875392 containerd[1580]: time="2025-01-29T12:55:10.875142822Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.009177291s" Jan 29 12:55:10.875392 containerd[1580]: time="2025-01-29T12:55:10.875218222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:55:10.928415 containerd[1580]: time="2025-01-29T12:55:10.928329922Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:55:11.507972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465937031.mount: Deactivated successfully. 
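Each "Pulled image ... in <duration>" line above pairs a byte count with a wall-clock duration, so pull throughput can be read straight out of the journal. A small parser for those lines (quotes are backslash-escaped in the journal, and durations appear in both ms and s):

```python
import re

PULL_RE = re.compile(r'size \\?"(?P<bytes>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)')

def pull_throughput(line: str) -> float:
    """Return MiB/s for one of containerd's 'Pulled image ...' lines above."""
    m = PULL_RE.search(line)
    if not m:
        raise ValueError("not a pull completion line")
    size = int(m.group("bytes"))
    dur = float(m.group("dur"))
    if m.group("unit") == "ms":
        dur /= 1000.0
    return size / dur / (1024 * 1024)

if __name__ == "__main__":
    # Numbers copied from the kube-proxy pull in the journal above:
    # 29057356 bytes in 2.349683398 s, i.e. roughly 11.8 MiB/s.
    print(round(pull_throughput('size \\"29057356\\" in 2.349683398s'), 1), "MiB/s")
```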
Jan 29 12:55:11.518929 containerd[1580]: time="2025-01-29T12:55:11.518344469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:11.520841 containerd[1580]: time="2025-01-29T12:55:11.520576015Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 29 12:55:11.523001 containerd[1580]: time="2025-01-29T12:55:11.522901603Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:11.528593 containerd[1580]: time="2025-01-29T12:55:11.528453059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:11.530993 containerd[1580]: time="2025-01-29T12:55:11.530687359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 602.295644ms" Jan 29 12:55:11.530993 containerd[1580]: time="2025-01-29T12:55:11.530813803Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 12:55:11.585445 containerd[1580]: time="2025-01-29T12:55:11.585144129Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:55:12.343385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195224632.mount: Deactivated successfully. Jan 29 12:55:16.342323 containerd[1580]: time="2025-01-29T12:55:16.342155199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:16.344102 containerd[1580]: time="2025-01-29T12:55:16.344040873Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 29 12:55:16.345605 containerd[1580]: time="2025-01-29T12:55:16.345541140Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:16.348935 containerd[1580]: time="2025-01-29T12:55:16.348910922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:16.350428 containerd[1580]: time="2025-01-29T12:55:16.350282350Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.765076777s" Jan 29 12:55:16.350428 containerd[1580]: time="2025-01-29T12:55:16.350315873Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 12:55:17.512474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
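systemd has now scheduled the fourth kubelet restart. Reading the "Scheduled restart job" timestamps off the journal shows the loop's cadence directly; a few lines of arithmetic, with the timestamps copied verbatim from the log (the journal omits the year, so deltas are all that matter):

```python
from datetime import datetime

# "kubelet.service: Scheduled restart job" times, restart counters 1-4.
starts = [
    "Jan 29 12:54:45.887316",
    "Jan 29 12:54:56.512313",
    "Jan 29 12:55:07.013042",
    "Jan 29 12:55:17.512474",
]
ts = [datetime.strptime(s, "%b %d %H:%M:%S.%f") for s in starts]
for a, b in zip(ts, ts[1:]):
    print(round((b - a).total_seconds(), 1), "s between restarts")
# Prints ~10.5 s each time: the unit's restart delay plus the time each
# attempt takes to fail on the missing config file.
```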
Jan 29 12:55:17.523068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:55:17.554779 update_engine[1567]: I20250129 12:55:17.553820 1567 update_attempter.cc:509] Updating boot flags... Jan 29 12:55:17.755953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:55:17.760736 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:55:17.934748 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2305) Jan 29 12:55:17.948474 kubelet[2294]: E0129 12:55:17.948437 2294 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:55:17.953288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:55:17.953460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:55:18.112813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2309) Jan 29 12:55:18.176011 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2309) Jan 29 12:55:20.698279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:55:20.705143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:55:20.738007 systemd[1]: Reloading requested from client PID 2323 ('systemctl') (unit session-9.scope)... Jan 29 12:55:20.738022 systemd[1]: Reloading... Jan 29 12:55:20.804828 zram_generator::config[2358]: No configuration found. Jan 29 12:55:20.966149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:55:21.044630 systemd[1]: Reloading finished in 306 ms. Jan 29 12:55:21.095942 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:55:21.096039 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:55:21.096300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:55:21.105286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:55:21.771066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:55:21.790498 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:55:21.869623 kubelet[2436]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:55:21.869623 kubelet[2436]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:55:21.869623 kubelet[2436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:55:21.870409 kubelet[2436]: I0129 12:55:21.869642 2436 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:55:22.510260 kubelet[2436]: I0129 12:55:22.509601 2436 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:55:22.510260 kubelet[2436]: I0129 12:55:22.509636 2436 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:55:22.510260 kubelet[2436]: I0129 12:55:22.509882 2436 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:55:22.533302 kubelet[2436]: I0129 12:55:22.533241 2436 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:55:22.536421 kubelet[2436]: E0129 12:55:22.536382 2436 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:22.546531 kubelet[2436]: I0129 12:55:22.546501 2436 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:55:22.546864 kubelet[2436]: I0129 12:55:22.546819 2436 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:55:22.547069 kubelet[2436]: I0129 12:55:22.546846 2436 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-0a72854eea.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:55:22.548257 kubelet[2436]: I0129 12:55:22.548195 2436 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:55:22.548257 kubelet[2436]: I0129 12:55:22.548219 2436 container_manager_linux.go:301] "Creating device plugin manager" 
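Every bootstrap call above fails with "dial tcp 172.24.4.72:6443: connect: connection refused": the TCP handshake itself is rejected because nothing is listening on the kube-apiserver port yet. A minimal probe that reproduces exactly that check, with host and port taken from the log:

```python
import socket

def api_server_reachable(host: str = "172.24.4.72", port: int = 6443) -> bool:
    """Probe the API server endpoint the kubelet is retrying above.

    'connection refused' means the TCP connect fails outright, which this
    reproduces without any TLS or HTTP layer on top.
    """
    try:
        socket.create_connection((host, port), timeout=2).close()
        return True
    except OSError:
        return False

if __name__ == "__main__":
    print("apiserver reachable:", api_server_reachable())
```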
Jan 29 12:55:22.548413 kubelet[2436]: I0129 12:55:22.548335 2436 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:55:22.549466 kubelet[2436]: I0129 12:55:22.549432 2436 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:55:22.549466 kubelet[2436]: I0129 12:55:22.549448 2436 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:55:22.549466 kubelet[2436]: I0129 12:55:22.549469 2436 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:55:22.550067 kubelet[2436]: I0129 12:55:22.549485 2436 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:55:22.554905 kubelet[2436]: W0129 12:55:22.554703 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:22.554905 kubelet[2436]: E0129 12:55:22.554759 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:22.555118 kubelet[2436]: W0129 12:55:22.555088 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-0a72854eea.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:22.555846 kubelet[2436]: E0129 12:55:22.555125 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-0a72854eea.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:22.555846 kubelet[2436]: I0129 12:55:22.555187 2436 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:55:22.557867 kubelet[2436]: I0129 12:55:22.557280 2436 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:55:22.557867 kubelet[2436]: W0129 12:55:22.557330 2436 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 12:55:22.559832 kubelet[2436]: I0129 12:55:22.558111 2436 server.go:1264] "Started kubelet" Jan 29 12:55:22.559832 kubelet[2436]: I0129 12:55:22.559492 2436 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:55:22.568519 kubelet[2436]: I0129 12:55:22.568464 2436 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:55:22.572154 kubelet[2436]: I0129 12:55:22.572117 2436 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:55:22.572451 kubelet[2436]: I0129 12:55:22.572359 2436 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:55:22.573478 kubelet[2436]: I0129 12:55:22.573446 2436 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:55:22.577702 kubelet[2436]: I0129 12:55:22.577652 2436 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:55:22.577845 kubelet[2436]: I0129 12:55:22.577814 2436 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:55:22.577970 kubelet[2436]: I0129 12:55:22.577872 2436 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:55:22.580644 kubelet[2436]: W0129 12:55:22.578685 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:22.580644 kubelet[2436]: E0129 12:55:22.578737 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:22.592654 kubelet[2436]: E0129 12:55:22.592556 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-0a72854eea.novalocal?timeout=10s\": dial tcp 172.24.4.72:6443: connect: connection refused" interval="200ms" Jan 29 12:55:22.593567 kubelet[2436]: I0129 12:55:22.593522 2436 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:55:22.594115 kubelet[2436]: I0129 12:55:22.593604 2436 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:55:22.596733 kubelet[2436]: E0129 12:55:22.596512 2436 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.72:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.72:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-e-0a72854eea.novalocal.181f2b0e85878a1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-0a72854eea.novalocal,UID:ci-4081-3-0-e-0a72854eea.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-0a72854eea.novalocal,},FirstTimestamp:2025-01-29 12:55:22.558089757 +0000 UTC m=+0.761144276,LastTimestamp:2025-01-29 12:55:22.558089757 +0000 UTC m=+0.761144276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-0a72854eea.novalocal,}" Jan 29 12:55:22.598603 kubelet[2436]: I0129 12:55:22.598563 2436 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:55:22.601295 kubelet[2436]: I0129 12:55:22.601244 2436 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:55:22.603614 kubelet[2436]: I0129 12:55:22.603580 2436 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:55:22.603873 kubelet[2436]: I0129 12:55:22.603847 2436 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:55:22.604048 kubelet[2436]: I0129 12:55:22.604026 2436 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:55:22.604257 kubelet[2436]: E0129 12:55:22.604221 2436 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:55:22.608981 kubelet[2436]: E0129 12:55:22.608904 2436 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:55:22.618695 kubelet[2436]: W0129 12:55:22.618657 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:22.619369 kubelet[2436]: E0129 12:55:22.618831 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:22.641673 kubelet[2436]: I0129 12:55:22.641649 2436 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:55:22.641673 kubelet[2436]: I0129 12:55:22.641666 2436 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:55:22.641757 kubelet[2436]: I0129 12:55:22.641681 2436 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:55:22.646361 kubelet[2436]: I0129 12:55:22.646316 2436 policy_none.go:49] "None policy: Start" Jan 29 12:55:22.647003 kubelet[2436]: I0129 12:55:22.646975 2436 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:55:22.647003 kubelet[2436]: I0129 12:55:22.647001 2436 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:55:22.653784 kubelet[2436]: I0129 12:55:22.652513 2436 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:55:22.653784 kubelet[2436]: I0129 12:55:22.652674 2436 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:55:22.653784 kubelet[2436]: I0129 12:55:22.652777 2436 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:55:22.656057 kubelet[2436]: E0129 12:55:22.656038 2436 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-e-0a72854eea.novalocal\" not found" Jan 29 12:55:22.678165 kubelet[2436]: I0129 12:55:22.678131 2436 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.678458 kubelet[2436]: E0129 12:55:22.678437 2436 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.72:6443/api/v1/nodes\": dial tcp 172.24.4.72:6443: connect: connection refused" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.704958 kubelet[2436]: I0129 12:55:22.704802 2436 topology_manager.go:215] "Topology Admit Handler" podUID="92eba97dcaff1f49c8c00e71db6d5219" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.706478 kubelet[2436]: I0129 12:55:22.706231 2436 topology_manager.go:215] "Topology Admit Handler" podUID="b483ae913f9ca69f66ee59054c345089" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.708037 kubelet[2436]: I0129 12:55:22.707584 2436 topology_manager.go:215] "Topology Admit Handler" podUID="d3f4547534c6908d63ddda7fc1e9e728" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.779109 kubelet[2436]: I0129 12:55:22.779032 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92eba97dcaff1f49c8c00e71db6d5219-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"92eba97dcaff1f49c8c00e71db6d5219\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.779283 kubelet[2436]: I0129 12:55:22.779117 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.779283 kubelet[2436]: I0129 12:55:22.779172 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.779283 kubelet[2436]: I0129 12:55:22.779221 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3f4547534c6908d63ddda7fc1e9e728-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"d3f4547534c6908d63ddda7fc1e9e728\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.779283 kubelet[2436]: I0129 12:55:22.779266 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92eba97dcaff1f49c8c00e71db6d5219-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"92eba97dcaff1f49c8c00e71db6d5219\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.779563 kubelet[2436]: I0129 12:55:22.779308 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: 
\"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.779563 kubelet[2436]: I0129 12:55:22.779351 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.779563 kubelet[2436]: I0129 12:55:22.779397 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.779563 kubelet[2436]: I0129 12:55:22.779442 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92eba97dcaff1f49c8c00e71db6d5219-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"92eba97dcaff1f49c8c00e71db6d5219\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.793604 kubelet[2436]: E0129 12:55:22.793431 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-0a72854eea.novalocal?timeout=10s\": dial tcp 172.24.4.72:6443: connect: connection refused" interval="400ms" Jan 29 12:55:22.882203 kubelet[2436]: I0129 12:55:22.882153 2436 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.883114 kubelet[2436]: E0129 12:55:22.882602 2436 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.72:6443/api/v1/nodes\": dial tcp 172.24.4.72:6443: connect: connection refused" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:22.933988 kubelet[2436]: E0129 12:55:22.933757 2436 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.72:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.72:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-e-0a72854eea.novalocal.181f2b0e85878a1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-0a72854eea.novalocal,UID:ci-4081-3-0-e-0a72854eea.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-0a72854eea.novalocal,},FirstTimestamp:2025-01-29 12:55:22.558089757 +0000 UTC m=+0.761144276,LastTimestamp:2025-01-29 12:55:22.558089757 +0000 UTC m=+0.761144276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-0a72854eea.novalocal,}" Jan 29 12:55:23.015782 containerd[1580]: time="2025-01-29T12:55:23.015665034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal,Uid:b483ae913f9ca69f66ee59054c345089,Namespace:kube-system,Attempt:0,}" 
Jan 29 12:55:23.018401 containerd[1580]: time="2025-01-29T12:55:23.017855896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal,Uid:92eba97dcaff1f49c8c00e71db6d5219,Namespace:kube-system,Attempt:0,}" Jan 29 12:55:23.019386 containerd[1580]: time="2025-01-29T12:55:23.019316047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-0a72854eea.novalocal,Uid:d3f4547534c6908d63ddda7fc1e9e728,Namespace:kube-system,Attempt:0,}" Jan 29 12:55:23.196639 kubelet[2436]: E0129 12:55:23.195264 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-0a72854eea.novalocal?timeout=10s\": dial tcp 172.24.4.72:6443: connect: connection refused" interval="800ms" Jan 29 12:55:23.287054 kubelet[2436]: I0129 12:55:23.286948 2436 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:23.287526 kubelet[2436]: E0129 12:55:23.287456 2436 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.72:6443/api/v1/nodes\": dial tcp 172.24.4.72:6443: connect: connection refused" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:23.643886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552789615.mount: Deactivated successfully. Jan 29 12:55:23.655507 containerd[1580]: time="2025-01-29T12:55:23.655282962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:55:23.660356 containerd[1580]: time="2025-01-29T12:55:23.660266147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 12:55:23.662437 containerd[1580]: time="2025-01-29T12:55:23.661906994Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:55:23.664864 containerd[1580]: time="2025-01-29T12:55:23.664050618Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:55:23.670813 containerd[1580]: time="2025-01-29T12:55:23.670543576Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:55:23.672697 containerd[1580]: time="2025-01-29T12:55:23.672572595Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:55:23.676272 containerd[1580]: time="2025-01-29T12:55:23.676166662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:55:23.681256 containerd[1580]: time="2025-01-29T12:55:23.681132154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:55:23.688552 containerd[1580]: time="2025-01-29T12:55:23.688014215Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 672.156523ms" Jan 29 12:55:23.694492 containerd[1580]: time="2025-01-29T12:55:23.694382441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.358632ms" Jan 29 12:55:23.696334 containerd[1580]: time="2025-01-29T12:55:23.696218912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.786208ms" Jan 29 12:55:23.742216 kubelet[2436]: W0129 12:55:23.742091 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:23.742216 kubelet[2436]: E0129 12:55:23.742173 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:23.825045 kubelet[2436]: W0129 12:55:23.825006 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:23.825242 kubelet[2436]: E0129 12:55:23.825229 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:23.859567 containerd[1580]: time="2025-01-29T12:55:23.858430648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:55:23.859567 containerd[1580]: time="2025-01-29T12:55:23.858478026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:55:23.859567 containerd[1580]: time="2025-01-29T12:55:23.858491992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:23.859567 containerd[1580]: time="2025-01-29T12:55:23.858563495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:23.860212 containerd[1580]: time="2025-01-29T12:55:23.855516809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:55:23.860212 containerd[1580]: time="2025-01-29T12:55:23.855671618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:55:23.860212 containerd[1580]: time="2025-01-29T12:55:23.855727942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:23.860212 containerd[1580]: time="2025-01-29T12:55:23.856240157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:23.865592 containerd[1580]: time="2025-01-29T12:55:23.865342375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:55:23.866966 containerd[1580]: time="2025-01-29T12:55:23.866914365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:55:23.867227 containerd[1580]: time="2025-01-29T12:55:23.867145565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:23.867624 containerd[1580]: time="2025-01-29T12:55:23.867535021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:23.946193 containerd[1580]: time="2025-01-29T12:55:23.946070906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal,Uid:b483ae913f9ca69f66ee59054c345089,Namespace:kube-system,Attempt:0,} returns sandbox id \"3011aeb6218d2ac68394dfd35917b313d7220b633ef07a2bf17cdcefcd9953e3\"" Jan 29 12:55:23.956790 containerd[1580]: time="2025-01-29T12:55:23.956061801Z" level=info msg="CreateContainer within sandbox \"3011aeb6218d2ac68394dfd35917b313d7220b633ef07a2bf17cdcefcd9953e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:55:23.966562 containerd[1580]: time="2025-01-29T12:55:23.966526227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal,Uid:92eba97dcaff1f49c8c00e71db6d5219,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a878c932195966c1962a377493db9766414073dd2a4f7fcad3f2189f778d593\"" Jan 29 12:55:23.971807 containerd[1580]: time="2025-01-29T12:55:23.971742486Z" level=info msg="CreateContainer within sandbox \"9a878c932195966c1962a377493db9766414073dd2a4f7fcad3f2189f778d593\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:55:23.974871 containerd[1580]: time="2025-01-29T12:55:23.974817205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-0a72854eea.novalocal,Uid:d3f4547534c6908d63ddda7fc1e9e728,Namespace:kube-system,Attempt:0,} returns sandbox id \"fae493a5c2a8eb8dca77883228bfa7427c2a15641838c8f51f1e0cedd3f2c0f8\"" Jan 29 12:55:23.977168 containerd[1580]: time="2025-01-29T12:55:23.977146835Z" level=info msg="CreateContainer within sandbox \"fae493a5c2a8eb8dca77883228bfa7427c2a15641838c8f51f1e0cedd3f2c0f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:55:23.977232 kubelet[2436]: W0129 12:55:23.977161 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.24.4.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:23.977232 kubelet[2436]: E0129 12:55:23.977214 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:23.996730 kubelet[2436]: E0129 12:55:23.996687 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-0a72854eea.novalocal?timeout=10s\": dial tcp 172.24.4.72:6443: connect: connection refused" interval="1.6s" Jan 29 12:55:24.015812 containerd[1580]: time="2025-01-29T12:55:24.015742373Z" level=info msg="CreateContainer within sandbox \"3011aeb6218d2ac68394dfd35917b313d7220b633ef07a2bf17cdcefcd9953e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"058ffe7079666bdcfa8e5c04bd10e85973e2fa137f7d6b0b50af6630dfe4fdb5\"" Jan 29 12:55:24.018176 containerd[1580]: time="2025-01-29T12:55:24.017014093Z" level=info msg="StartContainer for \"058ffe7079666bdcfa8e5c04bd10e85973e2fa137f7d6b0b50af6630dfe4fdb5\"" Jan 29 12:55:24.023838 containerd[1580]: time="2025-01-29T12:55:24.023781746Z" level=info msg="CreateContainer within sandbox \"fae493a5c2a8eb8dca77883228bfa7427c2a15641838c8f51f1e0cedd3f2c0f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92ec74177bcc95d2e3641d7362fe6b15beafc629fda576f73192049e885b4110\"" Jan 29 12:55:24.024726 containerd[1580]: time="2025-01-29T12:55:24.024705297Z" level=info msg="StartContainer for \"92ec74177bcc95d2e3641d7362fe6b15beafc629fda576f73192049e885b4110\"" Jan 29 12:55:24.026306 containerd[1580]: time="2025-01-29T12:55:24.026261397Z" level=info msg="CreateContainer within sandbox \"9a878c932195966c1962a377493db9766414073dd2a4f7fcad3f2189f778d593\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d285039df999f95e8089a63a5d883b5bfecc1140e5370d60cd523880079d7e2\"" Jan 29 12:55:24.027196 containerd[1580]: time="2025-01-29T12:55:24.027154201Z" level=info msg="StartContainer for \"4d285039df999f95e8089a63a5d883b5bfecc1140e5370d60cd523880079d7e2\"" Jan 29 12:55:24.063549 kubelet[2436]: W0129 12:55:24.063241 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-0a72854eea.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:24.064722 kubelet[2436]: E0129 12:55:24.064706 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-0a72854eea.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.72:6443: connect: connection refused Jan 29 12:55:24.091263 kubelet[2436]: I0129 12:55:24.091230 2436 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:24.091687 kubelet[2436]: E0129 12:55:24.091553 2436 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.72:6443/api/v1/nodes\": dial tcp 172.24.4.72:6443: connect: connection refused" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 
12:55:24.132963 containerd[1580]: time="2025-01-29T12:55:24.132460892Z" level=info msg="StartContainer for \"4d285039df999f95e8089a63a5d883b5bfecc1140e5370d60cd523880079d7e2\" returns successfully" Jan 29 12:55:24.132963 containerd[1580]: time="2025-01-29T12:55:24.132484515Z" level=info msg="StartContainer for \"058ffe7079666bdcfa8e5c04bd10e85973e2fa137f7d6b0b50af6630dfe4fdb5\" returns successfully" Jan 29 12:55:24.175636 containerd[1580]: time="2025-01-29T12:55:24.175260939Z" level=info msg="StartContainer for \"92ec74177bcc95d2e3641d7362fe6b15beafc629fda576f73192049e885b4110\" returns successfully" Jan 29 12:55:25.694792 kubelet[2436]: I0129 12:55:25.693185 2436 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:26.204596 kubelet[2436]: E0129 12:55:26.204547 2436 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-e-0a72854eea.novalocal\" not found" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:26.360308 kubelet[2436]: I0129 12:55:26.359305 2436 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:26.404099 kubelet[2436]: E0129 12:55:26.404066 2436 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-0a72854eea.novalocal\" not found" Jan 29 12:55:26.505249 kubelet[2436]: E0129 12:55:26.505042 2436 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-0a72854eea.novalocal\" not found" Jan 29 12:55:26.605635 kubelet[2436]: E0129 12:55:26.605538 2436 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-0a72854eea.novalocal\" not found" Jan 29 12:55:26.706507 kubelet[2436]: E0129 12:55:26.706434 2436 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-0a72854eea.novalocal\" not found" Jan 29 12:55:27.557964 kubelet[2436]: I0129 12:55:27.557822 2436 apiserver.go:52] "Watching apiserver" Jan 29 12:55:27.578315 kubelet[2436]: I0129 12:55:27.578259 2436 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:55:28.827873 systemd[1]: Reloading requested from client PID 2708 ('systemctl') (unit session-9.scope)... Jan 29 12:55:28.827906 systemd[1]: Reloading... Jan 29 12:55:28.930932 zram_generator::config[2743]: No configuration found. Jan 29 12:55:29.092992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:55:29.179352 systemd[1]: Reloading finished in 350 ms. Jan 29 12:55:29.218300 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 12:55:29.218991 kubelet[2436]: E0129 12:55:29.218724 2436 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-3-0-e-0a72854eea.novalocal.181f2b0e85878a1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-0a72854eea.novalocal,UID:ci-4081-3-0-e-0a72854eea.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-0a72854eea.novalocal,},FirstTimestamp:2025-01-29 12:55:22.558089757 +0000 UTC m=+0.761144276,LastTimestamp:2025-01-29 12:55:22.558089757 +0000 UTC m=+0.761144276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-0a72854eea.novalocal,}" Jan 29 12:55:29.230073 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:55:29.230340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:55:29.238348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:55:29.440978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:55:29.442973 (kubelet)[2821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:55:29.494817 kubelet[2821]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:55:29.494817 kubelet[2821]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:55:29.494817 kubelet[2821]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:55:29.495227 kubelet[2821]: I0129 12:55:29.494867 2821 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:55:29.499107 kubelet[2821]: I0129 12:55:29.499075 2821 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:55:29.499107 kubelet[2821]: I0129 12:55:29.499096 2821 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:55:29.499292 kubelet[2821]: I0129 12:55:29.499267 2821 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:55:29.500623 kubelet[2821]: I0129 12:55:29.500599 2821 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:55:29.502972 kubelet[2821]: I0129 12:55:29.502336 2821 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:55:29.512311 kubelet[2821]: I0129 12:55:29.512288 2821 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:55:29.512851 kubelet[2821]: I0129 12:55:29.512822 2821 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:55:29.513109 kubelet[2821]: I0129 12:55:29.512932 2821 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-0a72854eea.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:55:29.513239 kubelet[2821]: I0129 12:55:29.513228 2821 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:55:29.513303 kubelet[2821]: I0129 12:55:29.513295 2821 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:55:29.513423 kubelet[2821]: I0129 12:55:29.513413 2821 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:55:29.513623 kubelet[2821]: I0129 12:55:29.513609 2821 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:55:29.513699 kubelet[2821]: I0129 12:55:29.513689 2821 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:55:29.513820 kubelet[2821]: I0129 12:55:29.513809 2821 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:55:29.513900 kubelet[2821]: I0129 12:55:29.513891 2821 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:55:29.520880 kubelet[2821]: I0129 12:55:29.518199 2821 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:55:29.520880 kubelet[2821]: I0129 12:55:29.518350 2821 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:55:29.520880 kubelet[2821]: I0129 12:55:29.518749 2821 server.go:1264] "Started kubelet" Jan 29 12:55:29.525269 kubelet[2821]: E0129 12:55:29.525243 2821 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:55:29.525782 kubelet[2821]: I0129 12:55:29.525354 2821 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:55:29.525782 kubelet[2821]: I0129 12:55:29.525626 2821 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:55:29.525782 kubelet[2821]: I0129 12:55:29.525660 2821 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:55:29.527788 kubelet[2821]: I0129 12:55:29.527100 2821 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:55:29.528795 kubelet[2821]: I0129 12:55:29.528173 2821 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:55:29.543386 kubelet[2821]: I0129 12:55:29.543359 2821 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:55:29.543531 kubelet[2821]: I0129 12:55:29.543492 2821 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:55:29.543634 kubelet[2821]: I0129 12:55:29.543615 2821 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:55:29.547819 kubelet[2821]: I0129 12:55:29.547206 2821 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:55:29.547819 kubelet[2821]: I0129 12:55:29.547307 2821 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:55:29.548519 kubelet[2821]: I0129 12:55:29.548506 2821 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:55:29.550213 kubelet[2821]: I0129 12:55:29.550179 2821 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:55:29.551146 kubelet[2821]: I0129 12:55:29.551124 2821 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:55:29.551207 kubelet[2821]: I0129 12:55:29.551152 2821 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:55:29.551207 kubelet[2821]: I0129 12:55:29.551170 2821 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:55:29.551257 kubelet[2821]: E0129 12:55:29.551212 2821 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:55:29.627215 kubelet[2821]: I0129 12:55:29.626965 2821 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:55:29.627355 kubelet[2821]: I0129 12:55:29.627342 2821 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:55:29.627447 kubelet[2821]: I0129 12:55:29.627437 2821 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:55:29.627793 kubelet[2821]: I0129 12:55:29.627757 2821 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:55:29.628162 kubelet[2821]: I0129 12:55:29.628105 2821 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:55:29.628275 kubelet[2821]: I0129 12:55:29.628218 2821 policy_none.go:49] "None policy: Start" Jan 29 12:55:29.629858 kubelet[2821]: I0129 12:55:29.629283 2821 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:55:29.629858 kubelet[2821]: I0129 12:55:29.629303 2821 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:55:29.629858 kubelet[2821]: I0129 12:55:29.629460 2821 state_mem.go:75] "Updated machine memory state" Jan 29 12:55:29.631946 kubelet[2821]: I0129 12:55:29.630816 2821 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:55:29.631946 kubelet[2821]: I0129 12:55:29.631352 2821 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:55:29.637365 kubelet[2821]: I0129 12:55:29.637338 2821 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:55:29.651654 kubelet[2821]: I0129 12:55:29.651601 2821 topology_manager.go:215] "Topology Admit Handler" podUID="92eba97dcaff1f49c8c00e71db6d5219" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.651896 kubelet[2821]: I0129 12:55:29.651881 2821 topology_manager.go:215] "Topology Admit Handler" podUID="b483ae913f9ca69f66ee59054c345089" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.652043 kubelet[2821]: I0129 12:55:29.652026 2821 topology_manager.go:215] "Topology Admit Handler" podUID="d3f4547534c6908d63ddda7fc1e9e728" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.747496 kubelet[2821]: I0129 12:55:29.744502 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.747496 kubelet[2821]: I0129 12:55:29.747170 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3f4547534c6908d63ddda7fc1e9e728-kubeconfig\") pod 
\"kube-scheduler-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"d3f4547534c6908d63ddda7fc1e9e728\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.747496 kubelet[2821]: I0129 12:55:29.747394 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92eba97dcaff1f49c8c00e71db6d5219-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"92eba97dcaff1f49c8c00e71db6d5219\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.747908 kubelet[2821]: I0129 12:55:29.747621 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.747908 kubelet[2821]: I0129 12:55:29.747745 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.748201 kubelet[2821]: I0129 12:55:29.747884 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.748379 kubelet[2821]: I0129 12:55:29.748295 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92eba97dcaff1f49c8c00e71db6d5219-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"92eba97dcaff1f49c8c00e71db6d5219\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.748547 kubelet[2821]: I0129 12:55:29.748459 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92eba97dcaff1f49c8c00e71db6d5219-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"92eba97dcaff1f49c8c00e71db6d5219\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.748648 kubelet[2821]: I0129 12:55:29.748579 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b483ae913f9ca69f66ee59054c345089-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal\" (UID: \"b483ae913f9ca69f66ee59054c345089\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.750290 kubelet[2821]: I0129 12:55:29.750181 2821 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.759686 kubelet[2821]: W0129 12:55:29.759592 2821 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:55:29.761035 kubelet[2821]: W0129 12:55:29.760988 2821 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:55:29.765130 kubelet[2821]: W0129 12:55:29.764678 2821 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:55:29.778869 kubelet[2821]: I0129 12:55:29.778806 2821 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:29.779141 kubelet[2821]: I0129 12:55:29.778987 2821 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:30.515219 kubelet[2821]: I0129 12:55:30.515057 2821 apiserver.go:52] "Watching apiserver" Jan 29 12:55:30.544380 kubelet[2821]: I0129 12:55:30.544310 2821 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:55:30.612851 kubelet[2821]: W0129 12:55:30.612186 2821 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:55:30.612851 kubelet[2821]: E0129 12:55:30.612256 2821 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:55:30.653141 kubelet[2821]: I0129 12:55:30.652667 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-e-0a72854eea.novalocal" podStartSLOduration=1.6526261880000002 podStartE2EDuration="1.652626188s" podCreationTimestamp="2025-01-29 12:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:55:30.647149647 +0000 UTC m=+1.197332221" watchObservedRunningTime="2025-01-29 12:55:30.652626188 +0000 UTC m=+1.202808812" Jan 29 12:55:30.687049 kubelet[2821]: I0129 12:55:30.686402 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-e-0a72854eea.novalocal" podStartSLOduration=1.686361841 podStartE2EDuration="1.686361841s" podCreationTimestamp="2025-01-29 12:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:55:30.686204908 +0000 UTC m=+1.236387482" watchObservedRunningTime="2025-01-29 12:55:30.686361841 +0000 UTC m=+1.236544485" Jan 29 12:55:30.687049 kubelet[2821]: I0129 12:55:30.686618 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-e-0a72854eea.novalocal" podStartSLOduration=1.686605095 podStartE2EDuration="1.686605095s" podCreationTimestamp="2025-01-29 12:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:55:30.670862544 +0000 UTC m=+1.221045218" watchObservedRunningTime="2025-01-29 12:55:30.686605095 +0000 UTC m=+1.236787689" Jan 29 12:55:35.660103 sudo[1833]: pam_unix(sudo:session): session closed for user root Jan 29 
12:55:35.846864 sshd[1826]: pam_unix(sshd:session): session closed for user core Jan 29 12:55:35.855082 systemd[1]: sshd@6-172.24.4.72:22-172.24.4.1:40902.service: Deactivated successfully. Jan 29 12:55:35.862515 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:55:35.864347 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:55:35.867762 systemd-logind[1561]: Removed session 9. Jan 29 12:55:41.819542 kubelet[2821]: I0129 12:55:41.819504 2821 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:55:41.820042 containerd[1580]: time="2025-01-29T12:55:41.819980552Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:55:41.820303 kubelet[2821]: I0129 12:55:41.820182 2821 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:55:42.428675 kubelet[2821]: I0129 12:55:42.428447 2821 topology_manager.go:215] "Topology Admit Handler" podUID="dd1d99f9-76ac-41de-a62e-633d344c6ada" podNamespace="kube-system" podName="kube-proxy-btp8h" Jan 29 12:55:42.534562 kubelet[2821]: I0129 12:55:42.534517 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd1d99f9-76ac-41de-a62e-633d344c6ada-xtables-lock\") pod \"kube-proxy-btp8h\" (UID: \"dd1d99f9-76ac-41de-a62e-633d344c6ada\") " pod="kube-system/kube-proxy-btp8h" Jan 29 12:55:42.534562 kubelet[2821]: I0129 12:55:42.534568 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqkv\" (UniqueName: \"kubernetes.io/projected/dd1d99f9-76ac-41de-a62e-633d344c6ada-kube-api-access-hbqkv\") pod \"kube-proxy-btp8h\" (UID: \"dd1d99f9-76ac-41de-a62e-633d344c6ada\") " pod="kube-system/kube-proxy-btp8h" Jan 29 12:55:42.534727 kubelet[2821]: I0129 12:55:42.534593 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd1d99f9-76ac-41de-a62e-633d344c6ada-kube-proxy\") pod \"kube-proxy-btp8h\" (UID: \"dd1d99f9-76ac-41de-a62e-633d344c6ada\") " pod="kube-system/kube-proxy-btp8h" Jan 29 12:55:42.534727 kubelet[2821]: I0129 12:55:42.534615 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd1d99f9-76ac-41de-a62e-633d344c6ada-lib-modules\") pod \"kube-proxy-btp8h\" (UID: \"dd1d99f9-76ac-41de-a62e-633d344c6ada\") " pod="kube-system/kube-proxy-btp8h" Jan 29 12:55:42.748909 containerd[1580]: time="2025-01-29T12:55:42.748784306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btp8h,Uid:dd1d99f9-76ac-41de-a62e-633d344c6ada,Namespace:kube-system,Attempt:0,}" Jan 29 12:55:42.794180 containerd[1580]: time="2025-01-29T12:55:42.793558265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:55:42.794180 containerd[1580]: time="2025-01-29T12:55:42.793653083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:55:42.794180 containerd[1580]: time="2025-01-29T12:55:42.793675434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:42.794180 containerd[1580]: time="2025-01-29T12:55:42.793797452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:42.869521 containerd[1580]: time="2025-01-29T12:55:42.869443254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btp8h,Uid:dd1d99f9-76ac-41de-a62e-633d344c6ada,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ed81c7e841bcab5898a222d1d79ef0aa6b5c14ccefaba84736147eff1ae7536\"" Jan 29 12:55:42.874336 containerd[1580]: time="2025-01-29T12:55:42.874205817Z" level=info msg="CreateContainer within sandbox \"3ed81c7e841bcab5898a222d1d79ef0aa6b5c14ccefaba84736147eff1ae7536\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:55:42.899382 containerd[1580]: time="2025-01-29T12:55:42.899325014Z" level=info msg="CreateContainer within sandbox \"3ed81c7e841bcab5898a222d1d79ef0aa6b5c14ccefaba84736147eff1ae7536\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"06699dc145749d81d19f86f3e239e25e92be371814197ded2f4ae2bb8eb011a1\"" Jan 29 12:55:42.900016 containerd[1580]: time="2025-01-29T12:55:42.899940174Z" level=info msg="StartContainer for \"06699dc145749d81d19f86f3e239e25e92be371814197ded2f4ae2bb8eb011a1\"" Jan 29 12:55:42.961146 kubelet[2821]: I0129 12:55:42.957184 2821 topology_manager.go:215] "Topology Admit Handler" podUID="066ae17f-362a-4c3d-ad1a-74ca8d372079" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-w42k4" Jan 29 12:55:42.995969 containerd[1580]: time="2025-01-29T12:55:42.995917995Z" level=info msg="StartContainer for \"06699dc145749d81d19f86f3e239e25e92be371814197ded2f4ae2bb8eb011a1\" returns successfully" Jan 29 12:55:43.037585 kubelet[2821]: I0129 12:55:43.037482 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/066ae17f-362a-4c3d-ad1a-74ca8d372079-var-lib-calico\") pod \"tigera-operator-7bc55997bb-w42k4\" (UID: \"066ae17f-362a-4c3d-ad1a-74ca8d372079\") " pod="tigera-operator/tigera-operator-7bc55997bb-w42k4" Jan 29 12:55:43.037585 kubelet[2821]: I0129 12:55:43.037531 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqt2\" (UniqueName: \"kubernetes.io/projected/066ae17f-362a-4c3d-ad1a-74ca8d372079-kube-api-access-hjqt2\") pod \"tigera-operator-7bc55997bb-w42k4\" (UID: \"066ae17f-362a-4c3d-ad1a-74ca8d372079\") " pod="tigera-operator/tigera-operator-7bc55997bb-w42k4" Jan 29 12:55:43.270477 containerd[1580]: time="2025-01-29T12:55:43.270232859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-w42k4,Uid:066ae17f-362a-4c3d-ad1a-74ca8d372079,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:55:43.309012 containerd[1580]: time="2025-01-29T12:55:43.308824278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:55:43.309012 containerd[1580]: time="2025-01-29T12:55:43.308924014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:55:43.309012 containerd[1580]: time="2025-01-29T12:55:43.308943571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
Jan 29 12:55:43.309946 containerd[1580]: time="2025-01-29T12:55:43.309167900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:55:43.367549 containerd[1580]: time="2025-01-29T12:55:43.367512989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-w42k4,Uid:066ae17f-362a-4c3d-ad1a-74ca8d372079,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b8eb4ee81873511b61c4f98e582a12e242a2aa35e263240170c066cf40a1154c\""
Jan 29 12:55:43.369440 containerd[1580]: time="2025-01-29T12:55:43.369344744Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 29 12:55:45.784867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023188001.mount: Deactivated successfully.
Jan 29 12:55:46.410851 containerd[1580]: time="2025-01-29T12:55:46.410657603Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:55:46.413511 containerd[1580]: time="2025-01-29T12:55:46.412978944Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 29 12:55:46.415460 containerd[1580]: time="2025-01-29T12:55:46.415417724Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:55:46.420312 containerd[1580]: time="2025-01-29T12:55:46.420167195Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:55:46.421448 containerd[1580]: time="2025-01-29T12:55:46.420896188Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.051374394s"
Jan 29 12:55:46.421448 containerd[1580]: time="2025-01-29T12:55:46.420930413Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 29 12:55:46.425945 containerd[1580]: time="2025-01-29T12:55:46.425809406Z" level=info msg="CreateContainer within sandbox \"b8eb4ee81873511b61c4f98e582a12e242a2aa35e263240170c066cf40a1154c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 29 12:55:46.449626 containerd[1580]: time="2025-01-29T12:55:46.449358842Z" level=info msg="CreateContainer within sandbox \"b8eb4ee81873511b61c4f98e582a12e242a2aa35e263240170c066cf40a1154c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"10d777dc4b24599a3794789ad26c1db12396c7281019cb1e8e3ff025c6535881\""
Jan 29 12:55:46.451305 containerd[1580]: time="2025-01-29T12:55:46.450284934Z" level=info msg="StartContainer for \"10d777dc4b24599a3794789ad26c1db12396c7281019cb1e8e3ff025c6535881\""
Jan 29 12:55:46.508296 containerd[1580]: time="2025-01-29T12:55:46.508253265Z" level=info msg="StartContainer for \"10d777dc4b24599a3794789ad26c1db12396c7281019cb1e8e3ff025c6535881\" returns successfully"
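The pull of quay.io/tigera/operator:v1.36.2 above took 3.051374394s for 21762497 bytes read, and the ImageCreate events record the tag, the image id, and the repo digest as three names for the same content. A minimal sketch of the equivalent pull through containerd's Go client, assuming the default socket and the "k8s.io" namespace that CRI-managed images live in:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are stored under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack, which is what produces the ImageCreate /
	// "Pulled image ... in 3.051374394s" events in the log above.
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```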
Jan 29 12:55:46.682662 kubelet[2821]: I0129 12:55:46.681311 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-btp8h" podStartSLOduration=4.681239785 podStartE2EDuration="4.681239785s" podCreationTimestamp="2025-01-29 12:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:55:43.673021984 +0000 UTC m=+14.223204628" watchObservedRunningTime="2025-01-29 12:55:46.681239785 +0000 UTC m=+17.231422399"
Jan 29 12:55:46.682662 kubelet[2821]: I0129 12:55:46.681586 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-w42k4" podStartSLOduration=1.627380836 podStartE2EDuration="4.681543854s" podCreationTimestamp="2025-01-29 12:55:42 +0000 UTC" firstStartedPulling="2025-01-29 12:55:43.368870577 +0000 UTC m=+13.919053141" lastFinishedPulling="2025-01-29 12:55:46.423033595 +0000 UTC m=+16.973216159" observedRunningTime="2025-01-29 12:55:46.681464295 +0000 UTC m=+17.231646959" watchObservedRunningTime="2025-01-29 12:55:46.681543854 +0000 UTC m=+17.231726478"
Jan 29 12:55:49.623467 kubelet[2821]: I0129 12:55:49.623414 2821 topology_manager.go:215] "Topology Admit Handler" podUID="759526e3-e482-4760-b432-7ae6fbbc7012" podNamespace="calico-system" podName="calico-typha-5b9859ff97-nwm58"
Jan 29 12:55:49.680042 kubelet[2821]: I0129 12:55:49.679981 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/759526e3-e482-4760-b432-7ae6fbbc7012-typha-certs\") pod \"calico-typha-5b9859ff97-nwm58\" (UID: \"759526e3-e482-4760-b432-7ae6fbbc7012\") " pod="calico-system/calico-typha-5b9859ff97-nwm58"
Jan 29 12:55:49.680042 kubelet[2821]: I0129 12:55:49.680038 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2w9v\" (UniqueName: \"kubernetes.io/projected/759526e3-e482-4760-b432-7ae6fbbc7012-kube-api-access-s2w9v\") pod \"calico-typha-5b9859ff97-nwm58\" (UID: \"759526e3-e482-4760-b432-7ae6fbbc7012\") " pod="calico-system/calico-typha-5b9859ff97-nwm58"
Jan 29 12:55:49.680215 kubelet[2821]: I0129 12:55:49.680066 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759526e3-e482-4760-b432-7ae6fbbc7012-tigera-ca-bundle\") pod \"calico-typha-5b9859ff97-nwm58\" (UID: \"759526e3-e482-4760-b432-7ae6fbbc7012\") " pod="calico-system/calico-typha-5b9859ff97-nwm58"
Jan 29 12:55:49.749790 kubelet[2821]: I0129 12:55:49.749375 2821 topology_manager.go:215] "Topology Admit Handler" podUID="cce191d7-73d5-40e6-adb9-bf968dd705f7" podNamespace="calico-system" podName="calico-node-85h29"
Jan 29 12:55:49.780669 kubelet[2821]: I0129 12:55:49.780620 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cce191d7-73d5-40e6-adb9-bf968dd705f7-var-lib-calico\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29"
Jan 29 12:55:49.780669 kubelet[2821]: I0129 12:55:49.780669 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cce191d7-73d5-40e6-adb9-bf968dd705f7-cni-log-dir\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29"
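The two pod_startup_latency_tracker entries make the bookkeeping explicit: kube-proxy pulled no image (zero-value pull timestamps), so its SLO duration equals the end-to-end duration, while tigera-operator's SLO duration excludes the time spent pulling between firstStartedPulling and lastFinishedPulling. A small sketch reproducing that arithmetic from the timestamps logged above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-01-29 12:55:42 +0000 UTC" form in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-01-29 12:55:42 +0000 UTC")
	watchObservedRunning := parse("2025-01-29 12:55:46.681543854 +0000 UTC")
	firstPull := parse("2025-01-29 12:55:43.368870577 +0000 UTC")
	lastPull := parse("2025-01-29 12:55:46.423033595 +0000 UTC")

	e2e := watchObservedRunning.Sub(created) // podStartE2EDuration: 4.681543854s
	slo := e2e - lastPull.Sub(firstPull)     // pull time excluded:  1.627380836s
	fmt.Println(e2e, slo)
}
```

The 3.054s pull gap here is consistent with the 3.051s reported by containerd for the pull itself plus the surrounding CRI overhead.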
pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780852 kubelet[2821]: I0129 12:55:49.780696 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc27h\" (UniqueName: \"kubernetes.io/projected/cce191d7-73d5-40e6-adb9-bf968dd705f7-kube-api-access-lc27h\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780852 kubelet[2821]: I0129 12:55:49.780720 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cce191d7-73d5-40e6-adb9-bf968dd705f7-cni-bin-dir\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780852 kubelet[2821]: I0129 12:55:49.780777 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cce191d7-73d5-40e6-adb9-bf968dd705f7-node-certs\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780852 kubelet[2821]: I0129 12:55:49.780813 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cce191d7-73d5-40e6-adb9-bf968dd705f7-xtables-lock\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780852 kubelet[2821]: I0129 12:55:49.780832 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cce191d7-73d5-40e6-adb9-bf968dd705f7-var-run-calico\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780981 kubelet[2821]: I0129 12:55:49.780850 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cce191d7-73d5-40e6-adb9-bf968dd705f7-tigera-ca-bundle\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780981 kubelet[2821]: I0129 12:55:49.780869 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cce191d7-73d5-40e6-adb9-bf968dd705f7-cni-net-dir\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780981 kubelet[2821]: I0129 12:55:49.780911 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cce191d7-73d5-40e6-adb9-bf968dd705f7-policysync\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780981 kubelet[2821]: I0129 12:55:49.780934 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cce191d7-73d5-40e6-adb9-bf968dd705f7-lib-modules\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29" Jan 29 12:55:49.780981 kubelet[2821]: I0129 
Jan 29 12:55:49.780981 kubelet[2821]: I0129 12:55:49.780952 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cce191d7-73d5-40e6-adb9-bf968dd705f7-flexvol-driver-host\") pod \"calico-node-85h29\" (UID: \"cce191d7-73d5-40e6-adb9-bf968dd705f7\") " pod="calico-system/calico-node-85h29"
Jan 29 12:55:49.885587 kubelet[2821]: E0129 12:55:49.884327 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:55:49.885587 kubelet[2821]: W0129 12:55:49.884350 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:55:49.885587 kubelet[2821]: E0129 12:55:49.884378 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the driver-call.go:262 / driver-call.go:149 / plugins.go:730 triplet above recurs continuously from 12:55:49.884 through 12:55:50.108, interleaved with the entries that follow; the repeated occurrences are omitted ...]
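The repeating triplet is a single failed probe of Calico's nodeagent~uds FlexVolume driver: the uds binary is absent from the plugin directory, the driver call therefore produces no output, and unmarshalling that empty output as the driver's JSON status yields "unexpected end of JSON input". A stripped-down sketch of the probe path; the DriverStatus shape follows the FlexVolume convention but is simplified, and the error strings are reproduced from the log rather than generated by kubelet's exact code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is the JSON a FlexVolume driver must print in response to
// "init" (simplified; field names follow the FlexVolume convention).
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeDriver(executable string) (*driverStatus, error) {
	// kubelet execs `<driver> init` when it probes a plugin directory.
	out, err := exec.Command(executable, "init").CombinedOutput()
	if err != nil {
		// The binary is missing, so out stays empty; kubelet logs this at
		// driver-call.go:149 and keeps going to the unmarshal step.
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			executable, err, string(out))
	}
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// With empty output json.Unmarshal returns exactly
		// "unexpected end of JSON input", the error flooding the log above.
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v",
			string(out), uerr)
	}
	return &st, nil
}

func main() {
	_, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```

The flood is noisy but benign here: the probe reruns on every plugin-directory event, and each pass fails the same way until the Calico node agent installs the uds binary.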
Error: unexpected end of JSON input" Jan 29 12:55:49.918414 kubelet[2821]: E0129 12:55:49.918207 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.918414 kubelet[2821]: W0129 12:55:49.918354 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.918414 kubelet[2821]: E0129 12:55:49.918374 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.921295 kubelet[2821]: I0129 12:55:49.921050 2821 topology_manager.go:215] "Topology Admit Handler" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8" podNamespace="calico-system" podName="csi-node-driver-x8b47" Jan 29 12:55:49.921842 kubelet[2821]: E0129 12:55:49.921514 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8" Jan 29 12:55:49.935431 containerd[1580]: time="2025-01-29T12:55:49.933916232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9859ff97-nwm58,Uid:759526e3-e482-4760-b432-7ae6fbbc7012,Namespace:calico-system,Attempt:0,}" Jan 29 12:55:49.973184 kubelet[2821]: E0129 12:55:49.972937 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.973184 kubelet[2821]: W0129 12:55:49.972963 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.973184 kubelet[2821]: E0129 12:55:49.973000 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.974650 kubelet[2821]: E0129 12:55:49.974127 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.974650 kubelet[2821]: W0129 12:55:49.974453 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.974650 kubelet[2821]: E0129 12:55:49.974476 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:49.975887 kubelet[2821]: E0129 12:55:49.975626 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.975887 kubelet[2821]: W0129 12:55:49.975645 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.975887 kubelet[2821]: E0129 12:55:49.975661 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.977302 kubelet[2821]: E0129 12:55:49.977026 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.977302 kubelet[2821]: W0129 12:55:49.977043 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.977302 kubelet[2821]: E0129 12:55:49.977060 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.978391 kubelet[2821]: E0129 12:55:49.978173 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.978391 kubelet[2821]: W0129 12:55:49.978194 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.978391 kubelet[2821]: E0129 12:55:49.978213 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.979628 kubelet[2821]: E0129 12:55:49.979238 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.979628 kubelet[2821]: W0129 12:55:49.979254 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.979628 kubelet[2821]: E0129 12:55:49.979276 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.980803 kubelet[2821]: E0129 12:55:49.980323 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.980803 kubelet[2821]: W0129 12:55:49.980338 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.980803 kubelet[2821]: E0129 12:55:49.980353 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:49.982455 kubelet[2821]: E0129 12:55:49.981507 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.982455 kubelet[2821]: W0129 12:55:49.981520 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.982455 kubelet[2821]: E0129 12:55:49.981532 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.983293 kubelet[2821]: E0129 12:55:49.982761 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.983293 kubelet[2821]: W0129 12:55:49.982934 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.983293 kubelet[2821]: E0129 12:55:49.982950 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.984395 kubelet[2821]: E0129 12:55:49.983969 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.984395 kubelet[2821]: W0129 12:55:49.983982 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.984395 kubelet[2821]: E0129 12:55:49.983997 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.985297 kubelet[2821]: E0129 12:55:49.985041 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.985297 kubelet[2821]: W0129 12:55:49.985057 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.985297 kubelet[2821]: E0129 12:55:49.985070 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.986229 kubelet[2821]: E0129 12:55:49.986113 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.986229 kubelet[2821]: W0129 12:55:49.986130 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.986229 kubelet[2821]: E0129 12:55:49.986142 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:49.986617 kubelet[2821]: E0129 12:55:49.986481 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.986617 kubelet[2821]: W0129 12:55:49.986493 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.986617 kubelet[2821]: E0129 12:55:49.986503 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.986994 kubelet[2821]: E0129 12:55:49.986822 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.986994 kubelet[2821]: W0129 12:55:49.986845 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.986994 kubelet[2821]: E0129 12:55:49.986855 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.987314 kubelet[2821]: E0129 12:55:49.987246 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.987314 kubelet[2821]: W0129 12:55:49.987257 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.987314 kubelet[2821]: E0129 12:55:49.987267 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.987640 kubelet[2821]: E0129 12:55:49.987630 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.987793 kubelet[2821]: W0129 12:55:49.987699 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.987793 kubelet[2821]: E0129 12:55:49.987713 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.988060 kubelet[2821]: E0129 12:55:49.987990 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.988060 kubelet[2821]: W0129 12:55:49.988000 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.988060 kubelet[2821]: E0129 12:55:49.988009 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:49.988346 kubelet[2821]: E0129 12:55:49.988269 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.988346 kubelet[2821]: W0129 12:55:49.988279 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.988346 kubelet[2821]: E0129 12:55:49.988290 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.988596 kubelet[2821]: E0129 12:55:49.988519 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.988596 kubelet[2821]: W0129 12:55:49.988529 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.988596 kubelet[2821]: E0129 12:55:49.988538 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.988905 kubelet[2821]: E0129 12:55:49.988797 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.988905 kubelet[2821]: W0129 12:55:49.988808 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.988905 kubelet[2821]: E0129 12:55:49.988823 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.989881 kubelet[2821]: E0129 12:55:49.989523 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.989881 kubelet[2821]: W0129 12:55:49.989533 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.989881 kubelet[2821]: E0129 12:55:49.989543 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:49.989881 kubelet[2821]: I0129 12:55:49.989578 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dfee4c06-00be-4b24-82c3-46cade2a09c8-socket-dir\") pod \"csi-node-driver-x8b47\" (UID: \"dfee4c06-00be-4b24-82c3-46cade2a09c8\") " pod="calico-system/csi-node-driver-x8b47" Jan 29 12:55:49.989881 kubelet[2821]: E0129 12:55:49.989750 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.989881 kubelet[2821]: W0129 12:55:49.989760 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.989881 kubelet[2821]: E0129 12:55:49.989810 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.989881 kubelet[2821]: I0129 12:55:49.989827 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dfee4c06-00be-4b24-82c3-46cade2a09c8-registration-dir\") pod \"csi-node-driver-x8b47\" (UID: \"dfee4c06-00be-4b24-82c3-46cade2a09c8\") " pod="calico-system/csi-node-driver-x8b47" Jan 29 12:55:49.990355 kubelet[2821]: E0129 12:55:49.990343 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.990527 kubelet[2821]: W0129 12:55:49.990417 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.990527 kubelet[2821]: E0129 12:55:49.990436 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.990527 kubelet[2821]: I0129 12:55:49.990457 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hgdq\" (UniqueName: \"kubernetes.io/projected/dfee4c06-00be-4b24-82c3-46cade2a09c8-kube-api-access-9hgdq\") pod \"csi-node-driver-x8b47\" (UID: \"dfee4c06-00be-4b24-82c3-46cade2a09c8\") " pod="calico-system/csi-node-driver-x8b47" Jan 29 12:55:49.990983 kubelet[2821]: E0129 12:55:49.990927 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.990983 kubelet[2821]: W0129 12:55:49.990966 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.991319 kubelet[2821]: E0129 12:55:49.991202 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:49.991319 kubelet[2821]: I0129 12:55:49.991231 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfee4c06-00be-4b24-82c3-46cade2a09c8-kubelet-dir\") pod \"csi-node-driver-x8b47\" (UID: \"dfee4c06-00be-4b24-82c3-46cade2a09c8\") " pod="calico-system/csi-node-driver-x8b47" Jan 29 12:55:49.991724 kubelet[2821]: E0129 12:55:49.991646 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.991724 kubelet[2821]: W0129 12:55:49.991658 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.992104 kubelet[2821]: E0129 12:55:49.991990 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.992274 kubelet[2821]: E0129 12:55:49.992189 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.992274 kubelet[2821]: W0129 12:55:49.992198 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.992451 kubelet[2821]: E0129 12:55:49.992338 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.992753 kubelet[2821]: E0129 12:55:49.992729 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.992753 kubelet[2821]: W0129 12:55:49.992740 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.993315 kubelet[2821]: E0129 12:55:49.993070 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.993432 kubelet[2821]: E0129 12:55:49.993420 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.993531 kubelet[2821]: W0129 12:55:49.993496 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.993756 kubelet[2821]: E0129 12:55:49.993677 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:49.993756 kubelet[2821]: I0129 12:55:49.993706 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dfee4c06-00be-4b24-82c3-46cade2a09c8-varrun\") pod \"csi-node-driver-x8b47\" (UID: \"dfee4c06-00be-4b24-82c3-46cade2a09c8\") " pod="calico-system/csi-node-driver-x8b47" Jan 29 12:55:49.994176 kubelet[2821]: E0129 12:55:49.994101 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.994176 kubelet[2821]: W0129 12:55:49.994113 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.994340 kubelet[2821]: E0129 12:55:49.994263 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.994444 kubelet[2821]: E0129 12:55:49.994434 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.994619 kubelet[2821]: W0129 12:55:49.994508 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.994619 kubelet[2821]: E0129 12:55:49.994525 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.994941 kubelet[2821]: E0129 12:55:49.994853 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.994941 kubelet[2821]: W0129 12:55:49.994866 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.994941 kubelet[2821]: E0129 12:55:49.994879 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.995200 kubelet[2821]: E0129 12:55:49.995161 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.995200 kubelet[2821]: W0129 12:55:49.995173 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.995200 kubelet[2821]: E0129 12:55:49.995186 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:49.995678 kubelet[2821]: E0129 12:55:49.995576 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.995678 kubelet[2821]: W0129 12:55:49.995588 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.995678 kubelet[2821]: E0129 12:55:49.995613 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.997661 kubelet[2821]: E0129 12:55:49.997384 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.997661 kubelet[2821]: W0129 12:55:49.997401 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.997661 kubelet[2821]: E0129 12:55:49.997414 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:49.998265 kubelet[2821]: E0129 12:55:49.998186 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:49.998265 kubelet[2821]: W0129 12:55:49.998239 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:49.998265 kubelet[2821]: E0129 12:55:49.998251 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.008486 containerd[1580]: time="2025-01-29T12:55:50.008029950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:55:50.010211 containerd[1580]: time="2025-01-29T12:55:50.010050499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:55:50.010211 containerd[1580]: time="2025-01-29T12:55:50.010094051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:50.010649 containerd[1580]: time="2025-01-29T12:55:50.010494911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Jan 29 12:55:50.058761 containerd[1580]: time="2025-01-29T12:55:50.058712889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-85h29,Uid:cce191d7-73d5-40e6-adb9-bf968dd705f7,Namespace:calico-system,Attempt:0,}"
Jan 29 12:55:50.099816 kubelet[2821]: E0129 12:55:50.099740 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:55:50.100369 kubelet[2821]: W0129 12:55:50.100206 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:55:50.100369 kubelet[2821]: E0129 12:55:50.100235 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[identical FlexVolume init-probe triplets from 12:55:50.101 through 12:55:50.107 elided]
Error: unexpected end of JSON input" Jan 29 12:55:50.106872 kubelet[2821]: E0129 12:55:50.106589 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.106872 kubelet[2821]: W0129 12:55:50.106599 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.106872 kubelet[2821]: E0129 12:55:50.106792 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.107034 kubelet[2821]: E0129 12:55:50.107012 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.107034 kubelet[2821]: W0129 12:55:50.107022 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.108128 containerd[1580]: time="2025-01-29T12:55:50.105835629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:55:50.108128 containerd[1580]: time="2025-01-29T12:55:50.105905920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:55:50.108128 containerd[1580]: time="2025-01-29T12:55:50.105926058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:50.108128 containerd[1580]: time="2025-01-29T12:55:50.106019523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:55:50.108759 kubelet[2821]: E0129 12:55:50.107207 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.108759 kubelet[2821]: E0129 12:55:50.107594 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.108759 kubelet[2821]: W0129 12:55:50.107606 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.108759 kubelet[2821]: E0129 12:55:50.107620 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.108759 kubelet[2821]: E0129 12:55:50.108150 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.108759 kubelet[2821]: W0129 12:55:50.108161 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.108759 kubelet[2821]: E0129 12:55:50.108287 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:50.110622 kubelet[2821]: E0129 12:55:50.109007 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.110622 kubelet[2821]: W0129 12:55:50.109016 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.110622 kubelet[2821]: E0129 12:55:50.109357 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.110622 kubelet[2821]: E0129 12:55:50.109506 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.110622 kubelet[2821]: W0129 12:55:50.109517 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.110622 kubelet[2821]: E0129 12:55:50.109735 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.110622 kubelet[2821]: E0129 12:55:50.110040 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.110622 kubelet[2821]: W0129 12:55:50.110049 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.110622 kubelet[2821]: E0129 12:55:50.110220 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.112079 kubelet[2821]: E0129 12:55:50.110984 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.112079 kubelet[2821]: W0129 12:55:50.110999 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.112079 kubelet[2821]: E0129 12:55:50.111415 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.112079 kubelet[2821]: W0129 12:55:50.111423 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.112079 kubelet[2821]: E0129 12:55:50.111636 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.112079 kubelet[2821]: E0129 12:55:50.111667 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:50.112079 kubelet[2821]: E0129 12:55:50.111745 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.112079 kubelet[2821]: W0129 12:55:50.111754 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.112079 kubelet[2821]: E0129 12:55:50.111837 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.112322 kubelet[2821]: E0129 12:55:50.112266 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.112322 kubelet[2821]: W0129 12:55:50.112275 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.114171 kubelet[2821]: E0129 12:55:50.112730 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.114171 kubelet[2821]: W0129 12:55:50.112856 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.114171 kubelet[2821]: E0129 12:55:50.112873 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.114171 kubelet[2821]: E0129 12:55:50.113757 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.114171 kubelet[2821]: E0129 12:55:50.114113 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.114171 kubelet[2821]: W0129 12:55:50.114124 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.114171 kubelet[2821]: E0129 12:55:50.114137 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.115590 kubelet[2821]: E0129 12:55:50.114312 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.115590 kubelet[2821]: W0129 12:55:50.114320 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.115590 kubelet[2821]: E0129 12:55:50.114329 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:50.115590 kubelet[2821]: E0129 12:55:50.114795 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.115590 kubelet[2821]: W0129 12:55:50.114804 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.115590 kubelet[2821]: E0129 12:55:50.114815 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.115590 kubelet[2821]: E0129 12:55:50.115097 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.115590 kubelet[2821]: W0129 12:55:50.115106 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.115590 kubelet[2821]: E0129 12:55:50.115115 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.132178 kubelet[2821]: E0129 12:55:50.132091 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:50.132178 kubelet[2821]: W0129 12:55:50.132109 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:50.132178 kubelet[2821]: E0129 12:55:50.132127 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:50.161161 containerd[1580]: time="2025-01-29T12:55:50.161033343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9859ff97-nwm58,Uid:759526e3-e482-4760-b432-7ae6fbbc7012,Namespace:calico-system,Attempt:0,} returns sandbox id \"80c0c944e16db279de6eda37b2af27c8a823ed7bf4d9792a2c80145cc1a3c4a8\"" Jan 29 12:55:50.165750 containerd[1580]: time="2025-01-29T12:55:50.165539672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 12:55:50.188761 containerd[1580]: time="2025-01-29T12:55:50.188574276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-85h29,Uid:cce191d7-73d5-40e6-adb9-bf968dd705f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"a09d9f56cc7779244cab1c8af162945b93ac1f5b5fbf2ed8be2e53ca81b74a2d\"" Jan 29 12:55:51.555861 kubelet[2821]: E0129 12:55:51.552743 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8" Jan 29 12:55:51.921489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884547253.mount: Deactivated successfully. 
Jan 29 12:55:53.009365 containerd[1580]: time="2025-01-29T12:55:53.009328102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:55:53.010910 containerd[1580]: time="2025-01-29T12:55:53.010873272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 29 12:55:53.012743 containerd[1580]: time="2025-01-29T12:55:53.012701042Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:55:53.015722 containerd[1580]: time="2025-01-29T12:55:53.015679573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:55:53.016673 containerd[1580]: time="2025-01-29T12:55:53.016559489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.850979763s"
Jan 29 12:55:53.016673 containerd[1580]: time="2025-01-29T12:55:53.016607279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 29 12:55:53.018591 containerd[1580]: time="2025-01-29T12:55:53.018000896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 12:55:53.031528 containerd[1580]: time="2025-01-29T12:55:53.031490400Z" level=info msg="CreateContainer within sandbox \"80c0c944e16db279de6eda37b2af27c8a823ed7bf4d9792a2c80145cc1a3c4a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 12:55:53.053040 containerd[1580]: time="2025-01-29T12:55:53.053007333Z" level=info msg="CreateContainer within sandbox \"80c0c944e16db279de6eda37b2af27c8a823ed7bf4d9792a2c80145cc1a3c4a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"97373b2c4e340bb267db2be67e4123d5c36b344d8cd3725e8b5404105a9b3012\""
Jan 29 12:55:53.054810 containerd[1580]: time="2025-01-29T12:55:53.054079027Z" level=info msg="StartContainer for \"97373b2c4e340bb267db2be67e4123d5c36b344d8cd3725e8b5404105a9b3012\""
Jan 29 12:55:53.132935 containerd[1580]: time="2025-01-29T12:55:53.132847590Z" level=info msg="StartContainer for \"97373b2c4e340bb267db2be67e4123d5c36b344d8cd3725e8b5404105a9b3012\" returns successfully"
Jan 29 12:55:53.553338 kubelet[2821]: E0129 12:55:53.551880 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8"
Jan 29 12:55:53.718818 kubelet[2821]: E0129 12:55:53.718331 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:55:53.718818 kubelet[2821]: W0129 12:55:53.718418 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:55:53.718818 kubelet[2821]: E0129 12:55:53.718555 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
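The repeated `cni plugin not initialized` errors for csi-node-driver-x8b47 are expected at this stage: kubelet keeps NetworkReady=false until a network config appears in the CNI confdir, which calico-node writes only once it is running. A rough self-check, assuming the default confdir `/etc/cni/net.d`:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Default CNI confdir; kubelet/containerd can be configured elsewhere.
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		log.Fatalf("no CNI confdir yet: %v", err)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("confdir exists but holds no network config; NetworkReady stays false")
	}
}
```

Once calico-node drops its conflist there, the NetworkPluginNotReady errors stop and pending pods such as the CSI node driver can be synced.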
Error: unexpected end of JSON input" Jan 29 12:55:53.723067 kubelet[2821]: E0129 12:55:53.723036 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.723067 kubelet[2821]: W0129 12:55:53.723066 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.723249 kubelet[2821]: E0129 12:55:53.723089 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.723512 kubelet[2821]: E0129 12:55:53.723484 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.723587 kubelet[2821]: W0129 12:55:53.723512 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.723587 kubelet[2821]: E0129 12:55:53.723533 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.723946 kubelet[2821]: E0129 12:55:53.723918 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.723946 kubelet[2821]: W0129 12:55:53.723946 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.724120 kubelet[2821]: E0129 12:55:53.723967 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.724339 kubelet[2821]: E0129 12:55:53.724312 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.724420 kubelet[2821]: W0129 12:55:53.724339 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.724420 kubelet[2821]: E0129 12:55:53.724361 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.724681 kubelet[2821]: E0129 12:55:53.724655 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.724820 kubelet[2821]: W0129 12:55:53.724683 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.724820 kubelet[2821]: E0129 12:55:53.724704 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:53.725124 kubelet[2821]: E0129 12:55:53.725097 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.725226 kubelet[2821]: W0129 12:55:53.725124 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.725226 kubelet[2821]: E0129 12:55:53.725145 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.725504 kubelet[2821]: E0129 12:55:53.725476 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.725588 kubelet[2821]: W0129 12:55:53.725503 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.725588 kubelet[2821]: E0129 12:55:53.725524 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.725926 kubelet[2821]: E0129 12:55:53.725897 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.725926 kubelet[2821]: W0129 12:55:53.725925 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.726087 kubelet[2821]: E0129 12:55:53.725950 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.726317 kubelet[2821]: E0129 12:55:53.726289 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.726422 kubelet[2821]: W0129 12:55:53.726316 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.726422 kubelet[2821]: E0129 12:55:53.726337 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.726659 kubelet[2821]: E0129 12:55:53.726632 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.726735 kubelet[2821]: W0129 12:55:53.726659 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.726735 kubelet[2821]: E0129 12:55:53.726680 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:53.733476 kubelet[2821]: E0129 12:55:53.733444 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.733890 kubelet[2821]: W0129 12:55:53.733622 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.733890 kubelet[2821]: E0129 12:55:53.733661 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.734216 kubelet[2821]: E0129 12:55:53.734191 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.734349 kubelet[2821]: W0129 12:55:53.734327 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.734483 kubelet[2821]: E0129 12:55:53.734461 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.734886 kubelet[2821]: E0129 12:55:53.734847 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.734886 kubelet[2821]: W0129 12:55:53.734886 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.735050 kubelet[2821]: E0129 12:55:53.734942 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.735350 kubelet[2821]: E0129 12:55:53.735321 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.735350 kubelet[2821]: W0129 12:55:53.735350 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.735501 kubelet[2821]: E0129 12:55:53.735403 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.735841 kubelet[2821]: E0129 12:55:53.735810 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.735841 kubelet[2821]: W0129 12:55:53.735841 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.736012 kubelet[2821]: E0129 12:55:53.735870 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:53.736340 kubelet[2821]: E0129 12:55:53.736300 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.736340 kubelet[2821]: W0129 12:55:53.736331 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.737114 kubelet[2821]: E0129 12:55:53.736513 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.737114 kubelet[2821]: E0129 12:55:53.736633 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.737114 kubelet[2821]: W0129 12:55:53.736653 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.737114 kubelet[2821]: E0129 12:55:53.736945 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.737114 kubelet[2821]: E0129 12:55:53.736989 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.737114 kubelet[2821]: W0129 12:55:53.737008 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.737114 kubelet[2821]: E0129 12:55:53.737029 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.738237 kubelet[2821]: E0129 12:55:53.737568 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.738237 kubelet[2821]: W0129 12:55:53.737589 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.738237 kubelet[2821]: E0129 12:55:53.737624 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.738237 kubelet[2821]: E0129 12:55:53.737969 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.738237 kubelet[2821]: W0129 12:55:53.737987 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.738237 kubelet[2821]: E0129 12:55:53.738022 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:53.739008 kubelet[2821]: E0129 12:55:53.738733 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.739008 kubelet[2821]: W0129 12:55:53.738762 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.739008 kubelet[2821]: E0129 12:55:53.738833 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.739553 kubelet[2821]: E0129 12:55:53.739338 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.739553 kubelet[2821]: W0129 12:55:53.739365 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.739553 kubelet[2821]: E0129 12:55:53.739399 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.740193 kubelet[2821]: E0129 12:55:53.740010 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.740193 kubelet[2821]: W0129 12:55:53.740036 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.740193 kubelet[2821]: E0129 12:55:53.740099 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.741041 kubelet[2821]: E0129 12:55:53.740700 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.741041 kubelet[2821]: W0129 12:55:53.740725 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.741041 kubelet[2821]: E0129 12:55:53.740762 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.741426 kubelet[2821]: E0129 12:55:53.741400 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.741584 kubelet[2821]: W0129 12:55:53.741558 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.742105 kubelet[2821]: E0129 12:55:53.741719 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:53.742518 kubelet[2821]: E0129 12:55:53.742474 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.742518 kubelet[2821]: W0129 12:55:53.742503 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.742687 kubelet[2821]: E0129 12:55:53.742533 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.743365 kubelet[2821]: E0129 12:55:53.743335 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.743693 kubelet[2821]: W0129 12:55:53.743530 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.743693 kubelet[2821]: E0129 12:55:53.743589 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:53.744350 kubelet[2821]: E0129 12:55:53.744258 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:53.744350 kubelet[2821]: W0129 12:55:53.744283 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:53.744350 kubelet[2821]: E0129 12:55:53.744306 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.692535 kubelet[2821]: I0129 12:55:54.692511 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:55:54.735009 kubelet[2821]: E0129 12:55:54.734857 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.735009 kubelet[2821]: W0129 12:55:54.734886 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.735009 kubelet[2821]: E0129 12:55:54.734902 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.772928 kubelet[2821]: E0129 12:55:54.735131 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.772928 kubelet[2821]: W0129 12:55:54.735141 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.772928 kubelet[2821]: E0129 12:55:54.735171 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:54.772928 kubelet[2821]: E0129 12:55:54.735347 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.772928 kubelet[2821]: W0129 12:55:54.735355 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.772928 kubelet[2821]: E0129 12:55:54.735364 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.772928 kubelet[2821]: E0129 12:55:54.735618 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.772928 kubelet[2821]: W0129 12:55:54.735627 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.772928 kubelet[2821]: E0129 12:55:54.735636 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.772928 kubelet[2821]: E0129 12:55:54.735912 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.773601 kubelet[2821]: W0129 12:55:54.735922 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.773601 kubelet[2821]: E0129 12:55:54.735932 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.773601 kubelet[2821]: E0129 12:55:54.736165 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.773601 kubelet[2821]: W0129 12:55:54.736175 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.773601 kubelet[2821]: E0129 12:55:54.736183 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.773601 kubelet[2821]: E0129 12:55:54.736388 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.773601 kubelet[2821]: W0129 12:55:54.736397 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.773601 kubelet[2821]: E0129 12:55:54.736433 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:54.773601 kubelet[2821]: E0129 12:55:54.736656 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.773601 kubelet[2821]: W0129 12:55:54.736664 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.774199 kubelet[2821]: E0129 12:55:54.736672 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.774199 kubelet[2821]: E0129 12:55:54.736947 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.774199 kubelet[2821]: W0129 12:55:54.736955 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.774199 kubelet[2821]: E0129 12:55:54.736963 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.774199 kubelet[2821]: E0129 12:55:54.737220 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.774199 kubelet[2821]: W0129 12:55:54.737230 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.774199 kubelet[2821]: E0129 12:55:54.737239 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.774199 kubelet[2821]: E0129 12:55:54.737487 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.774199 kubelet[2821]: W0129 12:55:54.737496 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.774199 kubelet[2821]: E0129 12:55:54.737505 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.774752 kubelet[2821]: E0129 12:55:54.737778 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.774752 kubelet[2821]: W0129 12:55:54.737788 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.774752 kubelet[2821]: E0129 12:55:54.737797 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:54.774752 kubelet[2821]: E0129 12:55:54.738016 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.774752 kubelet[2821]: W0129 12:55:54.738024 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.774752 kubelet[2821]: E0129 12:55:54.738032 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.774752 kubelet[2821]: E0129 12:55:54.738247 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.774752 kubelet[2821]: W0129 12:55:54.738256 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.774752 kubelet[2821]: E0129 12:55:54.738277 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.774752 kubelet[2821]: E0129 12:55:54.738487 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.775343 kubelet[2821]: W0129 12:55:54.738495 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.775343 kubelet[2821]: E0129 12:55:54.738504 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.775343 kubelet[2821]: E0129 12:55:54.746119 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.775343 kubelet[2821]: W0129 12:55:54.746149 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.775343 kubelet[2821]: E0129 12:55:54.746184 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.775343 kubelet[2821]: E0129 12:55:54.746568 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.775343 kubelet[2821]: W0129 12:55:54.746588 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.775343 kubelet[2821]: E0129 12:55:54.746622 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:54.775343 kubelet[2821]: E0129 12:55:54.747044 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.775343 kubelet[2821]: W0129 12:55:54.747065 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.775929 kubelet[2821]: E0129 12:55:54.747100 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.775929 kubelet[2821]: E0129 12:55:54.747700 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.775929 kubelet[2821]: W0129 12:55:54.747713 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.775929 kubelet[2821]: E0129 12:55:54.747732 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.775929 kubelet[2821]: E0129 12:55:54.747895 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.775929 kubelet[2821]: W0129 12:55:54.747903 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.775929 kubelet[2821]: E0129 12:55:54.747912 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.775929 kubelet[2821]: E0129 12:55:54.748040 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.775929 kubelet[2821]: W0129 12:55:54.748048 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.775929 kubelet[2821]: E0129 12:55:54.748181 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.776478 kubelet[2821]: E0129 12:55:54.748245 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.776478 kubelet[2821]: W0129 12:55:54.748252 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.776478 kubelet[2821]: E0129 12:55:54.748362 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:54.776478 kubelet[2821]: E0129 12:55:54.748392 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.776478 kubelet[2821]: W0129 12:55:54.748400 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.776478 kubelet[2821]: E0129 12:55:54.748432 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.776478 kubelet[2821]: E0129 12:55:54.748623 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.776478 kubelet[2821]: W0129 12:55:54.748632 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.776478 kubelet[2821]: E0129 12:55:54.748656 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.776478 kubelet[2821]: E0129 12:55:54.748971 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.777072 kubelet[2821]: W0129 12:55:54.748979 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.777072 kubelet[2821]: E0129 12:55:54.748996 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.777072 kubelet[2821]: E0129 12:55:54.749258 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.777072 kubelet[2821]: W0129 12:55:54.749267 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.777072 kubelet[2821]: E0129 12:55:54.749287 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.777072 kubelet[2821]: E0129 12:55:54.749502 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.777072 kubelet[2821]: W0129 12:55:54.749515 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.777072 kubelet[2821]: E0129 12:55:54.749537 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:54.777072 kubelet[2821]: E0129 12:55:54.749912 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.777072 kubelet[2821]: W0129 12:55:54.749921 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.777642 kubelet[2821]: E0129 12:55:54.749932 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.777642 kubelet[2821]: E0129 12:55:54.750249 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.777642 kubelet[2821]: W0129 12:55:54.750258 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.777642 kubelet[2821]: E0129 12:55:54.750275 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.777642 kubelet[2821]: E0129 12:55:54.750516 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.777642 kubelet[2821]: W0129 12:55:54.750524 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.777642 kubelet[2821]: E0129 12:55:54.750538 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.777642 kubelet[2821]: E0129 12:55:54.750932 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.777642 kubelet[2821]: W0129 12:55:54.750941 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.777642 kubelet[2821]: E0129 12:55:54.750950 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.778293 kubelet[2821]: E0129 12:55:54.751174 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.778293 kubelet[2821]: W0129 12:55:54.751182 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.778293 kubelet[2821]: E0129 12:55:54.751199 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:55:54.778293 kubelet[2821]: E0129 12:55:54.751400 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:55:54.778293 kubelet[2821]: W0129 12:55:54.751408 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:55:54.778293 kubelet[2821]: E0129 12:55:54.751416 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:55:54.994850 containerd[1580]: time="2025-01-29T12:55:54.994712457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:54.996557 containerd[1580]: time="2025-01-29T12:55:54.996515680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 12:55:54.999237 containerd[1580]: time="2025-01-29T12:55:54.998203408Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:55.000847 containerd[1580]: time="2025-01-29T12:55:55.000813501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:55:55.003180 containerd[1580]: time="2025-01-29T12:55:55.003149010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.985117217s" Jan 29 12:55:55.003237 containerd[1580]: time="2025-01-29T12:55:55.003210816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 12:55:55.008163 containerd[1580]: time="2025-01-29T12:55:55.008054047Z" level=info msg="CreateContainer within sandbox \"a09d9f56cc7779244cab1c8af162945b93ac1f5b5fbf2ed8be2e53ca81b74a2d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:55:55.028757 containerd[1580]: time="2025-01-29T12:55:55.028709270Z" level=info msg="CreateContainer within sandbox \"a09d9f56cc7779244cab1c8af162945b93ac1f5b5fbf2ed8be2e53ca81b74a2d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0b473dfc023c8b2bb1ca405800c16a218f0fde01d1f64f19ef51be020e193b93\"" Jan 29 12:55:55.030003 containerd[1580]: time="2025-01-29T12:55:55.029575120Z" level=info msg="StartContainer for \"0b473dfc023c8b2bb1ca405800c16a218f0fde01d1f64f19ef51be020e193b93\"" Jan 29 12:55:55.107004 containerd[1580]: time="2025-01-29T12:55:55.106746566Z" level=info msg="StartContainer for \"0b473dfc023c8b2bb1ca405800c16a218f0fde01d1f64f19ef51be020e193b93\" returns successfully" Jan 29 12:55:55.142109 systemd[1]: 
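The burst of driver-call.go and plugins.go errors above (the same three messages repeating between 12:55:54.738 and 12:55:54.751 as the kubelet re-probes its plugin directory) is the FlexVolume dynamic-plugin probe: for each vendor~driver directory under the plugin path, the kubelet execs the driver binary with the argument init and expects a JSON status object on stdout. Here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so the call produces empty output, and unmarshalling an empty string fails with exactly "unexpected end of JSON input"; the directory is then skipped. A minimal sketch of that handshake follows; it is not the kubelet's actual code, and the struct fields are a simplified rendering of the documented FlexVolume status format:

```go
package main

// Illustrative reproduction of the FlexVolume probe failure logged above.
// The kubelet runs `<driver> init` and expects JSON such as
// {"status":"Success","capabilities":{"attach":false}} on stdout; a missing
// binary yields empty output, and decoding "" gives the logged error.
import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the documented FlexVolume reply (simplified).
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities"`
}

func probeDriver(path string) error {
	// out stays empty when the binary is absent.
	out, execErr := exec.Command(path, "init").CombinedOutput()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is: unexpected end of JSON input
		return fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return nil
}

func main() {
	// Same vendor~driver layout as in the log; expected to be missing here.
	fmt.Println(probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}
```

As the log itself shows, the failed probe only skips the nodeagent~uds directory; it does not block the flexvol-driver container that starts successfully right afterwards.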
Jan 29 12:55:55.142109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b473dfc023c8b2bb1ca405800c16a218f0fde01d1f64f19ef51be020e193b93-rootfs.mount: Deactivated successfully. Jan 29 12:55:55.554142 kubelet[2821]: E0129 12:55:55.552737 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8" Jan 29 12:55:55.786145 kubelet[2821]: I0129 12:55:55.774568 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b9859ff97-nwm58" podStartSLOduration=3.921799206 podStartE2EDuration="6.774529784s" podCreationTimestamp="2025-01-29 12:55:49 +0000 UTC" firstStartedPulling="2025-01-29 12:55:50.164838319 +0000 UTC m=+20.715020883" lastFinishedPulling="2025-01-29 12:55:53.017568887 +0000 UTC m=+23.567751461" observedRunningTime="2025-01-29 12:55:53.714883497 +0000 UTC m=+24.265066111" watchObservedRunningTime="2025-01-29 12:55:55.774529784 +0000 UTC m=+26.324712469" Jan 29 12:55:55.838911 containerd[1580]: time="2025-01-29T12:55:55.838661536Z" level=info msg="shim disconnected" id=0b473dfc023c8b2bb1ca405800c16a218f0fde01d1f64f19ef51be020e193b93 namespace=k8s.io Jan 29 12:55:55.838911 containerd[1580]: time="2025-01-29T12:55:55.838761863Z" level=warning msg="cleaning up after shim disconnected" id=0b473dfc023c8b2bb1ca405800c16a218f0fde01d1f64f19ef51be020e193b93 namespace=k8s.io Jan 29 12:55:55.838911 containerd[1580]: time="2025-01-29T12:55:55.838825172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:55:56.711266 containerd[1580]: time="2025-01-29T12:55:56.711183339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:55:56.830998 kubelet[2821]: I0129 12:55:56.830609 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:55:57.553515 kubelet[2821]: E0129 12:55:57.552399 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8" Jan 29 12:55:59.555712 kubelet[2821]: E0129 12:55:59.554347 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8" Jan 29 12:56:01.551962 kubelet[2821]: E0129 12:56:01.551713 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8" Jan 29 12:56:03.551910 kubelet[2821]: E0129 12:56:03.551667 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8"
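The pod_workers.go entries repeating every couple of seconds are the kubelet retrying the csi-node-driver pod while the runtime still reports NetworkReady=false: containerd flips that condition only once a CNI network configuration exists under /etc/cni/net.d, which is what the install-cni container below is about to provide. A rough standalone illustration of that readiness gate, under the assumption that any *.conf/*.conflist/*.json file in the directory counts as a config (not containerd's actual code):

```go
package main

// Standalone sketch of the "cni plugin not initialized" gate seen above:
// poll /etc/cni/net.d until some CNI network config file appears.
import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// cniConfigPresent reports whether any plausible CNI config file exists.
func cniConfigPresent(dir string) bool {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		if matches, _ := filepath.Glob(filepath.Join(dir, pat)); len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	const dir = "/etc/cni/net.d"
	for !cniConfigPresent(dir) {
		fmt.Fprintf(os.Stderr, "network not ready: no CNI config in %s\n", dir)
		time.Sleep(2 * time.Second) // roughly the retry cadence visible in the log
	}
	fmt.Println("CNI config present; network can initialize")
}
```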
Jan 29 12:56:03.677665 containerd[1580]: time="2025-01-29T12:56:03.677461945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:03.680226 containerd[1580]: time="2025-01-29T12:56:03.680119298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 12:56:03.683710 containerd[1580]: time="2025-01-29T12:56:03.682015015Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:03.690711 containerd[1580]: time="2025-01-29T12:56:03.690599149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:03.693303 containerd[1580]: time="2025-01-29T12:56:03.693030258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.980629992s" Jan 29 12:56:03.693303 containerd[1580]: time="2025-01-29T12:56:03.693118172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 12:56:03.700727 containerd[1580]: time="2025-01-29T12:56:03.700399518Z" level=info msg="CreateContainer within sandbox \"a09d9f56cc7779244cab1c8af162945b93ac1f5b5fbf2ed8be2e53ca81b74a2d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:56:03.734437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880134061.mount: Deactivated successfully.
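With the CNI image pulled, containerd next creates Calico's install-cni container inside the same sandbox that ran flexvol-driver. Until the calico/node container itself is up, every sandbox add/delete below fails on stat /var/lib/calico/nodename: the Calico CNI plugin reads that file, written by calico/node into the mounted /var/lib/calico/ directory, to learn which Calico node object it is acting for. A simplified sketch of that lookup, illustrative rather than Calico's actual source, with the error wording modeled on the log:

```go
package main

// Sketch of the nodename lookup behind the "stat /var/lib/calico/nodename"
// failures that follow. Not Calico's real implementation.
import (
	"fmt"
	"os"
	"strings"
)

// calicoNodename returns the node identity written by calico/node, or an
// error in the spirit of the CNI plugin's message when the file is absent.
func calicoNodename(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", path, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodename("/var/lib/calico/nodename")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("acting as Calico node:", name)
}
```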
Jan 29 12:56:03.743383 containerd[1580]: time="2025-01-29T12:56:03.743275893Z" level=info msg="CreateContainer within sandbox \"a09d9f56cc7779244cab1c8af162945b93ac1f5b5fbf2ed8be2e53ca81b74a2d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9abd71d86df4c183d81ab37c2b40a4464c9e24985b2fc0e9975649e018c479c0\"" Jan 29 12:56:03.746077 containerd[1580]: time="2025-01-29T12:56:03.745245389Z" level=info msg="StartContainer for \"9abd71d86df4c183d81ab37c2b40a4464c9e24985b2fc0e9975649e018c479c0\"" Jan 29 12:56:03.836388 containerd[1580]: time="2025-01-29T12:56:03.836180176Z" level=info msg="StartContainer for \"9abd71d86df4c183d81ab37c2b40a4464c9e24985b2fc0e9975649e018c479c0\" returns successfully" Jan 29 12:56:05.057807 containerd[1580]: time="2025-01-29T12:56:05.057672142Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:56:05.082520 kubelet[2821]: I0129 12:56:05.081380 2821 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:56:05.119948 kubelet[2821]: I0129 12:56:05.119568 2821 topology_manager.go:215] "Topology Admit Handler" podUID="238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t9ghv" Jan 29 12:56:05.135992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9abd71d86df4c183d81ab37c2b40a4464c9e24985b2fc0e9975649e018c479c0-rootfs.mount: Deactivated successfully. Jan 29 12:56:05.139193 kubelet[2821]: I0129 12:56:05.137041 2821 topology_manager.go:215] "Topology Admit Handler" podUID="6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7" podNamespace="calico-system" podName="calico-kube-controllers-68547577c7-5ck74" Jan 29 12:56:05.151336 kubelet[2821]: I0129 12:56:05.151100 2821 topology_manager.go:215] "Topology Admit Handler" podUID="1e22ea74-beab-433f-90fd-803bfaa2c127" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wxmjl" Jan 29 12:56:05.151787 kubelet[2821]: I0129 12:56:05.151715 2821 topology_manager.go:215] "Topology Admit Handler" podUID="f40c0f50-7def-47b4-a1b4-bdc72134047a" podNamespace="calico-apiserver" podName="calico-apiserver-64cd9546dc-kqbdx" Jan 29 12:56:05.155797 kubelet[2821]: I0129 12:56:05.154975 2821 topology_manager.go:215] "Topology Admit Handler" podUID="f2143d0c-26ec-4f29-91e9-994f575528dd" podNamespace="calico-apiserver" podName="calico-apiserver-64cd9546dc-s76sw" Jan 29 12:56:05.275540 kubelet[2821]: I0129 12:56:05.275386 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqj4w\" (UniqueName: \"kubernetes.io/projected/1e22ea74-beab-433f-90fd-803bfaa2c127-kube-api-access-hqj4w\") pod \"coredns-7db6d8ff4d-wxmjl\" (UID: \"1e22ea74-beab-433f-90fd-803bfaa2c127\") " pod="kube-system/coredns-7db6d8ff4d-wxmjl" Jan 29 12:56:05.275816 kubelet[2821]: I0129 12:56:05.275570 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f40c0f50-7def-47b4-a1b4-bdc72134047a-calico-apiserver-certs\") pod \"calico-apiserver-64cd9546dc-kqbdx\" (UID: \"f40c0f50-7def-47b4-a1b4-bdc72134047a\") " pod="calico-apiserver/calico-apiserver-64cd9546dc-kqbdx" Jan 29 12:56:05.275816 kubelet[2821]: I0129 12:56:05.275700 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-9g2bl\" (UniqueName: \"kubernetes.io/projected/f40c0f50-7def-47b4-a1b4-bdc72134047a-kube-api-access-9g2bl\") pod \"calico-apiserver-64cd9546dc-kqbdx\" (UID: \"f40c0f50-7def-47b4-a1b4-bdc72134047a\") " pod="calico-apiserver/calico-apiserver-64cd9546dc-kqbdx" Jan 29 12:56:05.276004 kubelet[2821]: I0129 12:56:05.275876 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znv6w\" (UniqueName: \"kubernetes.io/projected/6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7-kube-api-access-znv6w\") pod \"calico-kube-controllers-68547577c7-5ck74\" (UID: \"6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7\") " pod="calico-system/calico-kube-controllers-68547577c7-5ck74" Jan 29 12:56:05.276268 kubelet[2821]: I0129 12:56:05.276224 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2143d0c-26ec-4f29-91e9-994f575528dd-calico-apiserver-certs\") pod \"calico-apiserver-64cd9546dc-s76sw\" (UID: \"f2143d0c-26ec-4f29-91e9-994f575528dd\") " pod="calico-apiserver/calico-apiserver-64cd9546dc-s76sw" Jan 29 12:56:05.276415 kubelet[2821]: I0129 12:56:05.276290 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwb6w\" (UniqueName: \"kubernetes.io/projected/f2143d0c-26ec-4f29-91e9-994f575528dd-kube-api-access-mwb6w\") pod \"calico-apiserver-64cd9546dc-s76sw\" (UID: \"f2143d0c-26ec-4f29-91e9-994f575528dd\") " pod="calico-apiserver/calico-apiserver-64cd9546dc-s76sw" Jan 29 12:56:05.276415 kubelet[2821]: I0129 12:56:05.276346 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7-tigera-ca-bundle\") pod \"calico-kube-controllers-68547577c7-5ck74\" (UID: \"6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7\") " pod="calico-system/calico-kube-controllers-68547577c7-5ck74" Jan 29 12:56:05.276415 kubelet[2821]: I0129 12:56:05.276391 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e22ea74-beab-433f-90fd-803bfaa2c127-config-volume\") pod \"coredns-7db6d8ff4d-wxmjl\" (UID: \"1e22ea74-beab-433f-90fd-803bfaa2c127\") " pod="kube-system/coredns-7db6d8ff4d-wxmjl" Jan 29 12:56:05.276608 kubelet[2821]: I0129 12:56:05.276438 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c-config-volume\") pod \"coredns-7db6d8ff4d-t9ghv\" (UID: \"238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c\") " pod="kube-system/coredns-7db6d8ff4d-t9ghv" Jan 29 12:56:05.276608 kubelet[2821]: I0129 12:56:05.276482 2821 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hq4h\" (UniqueName: \"kubernetes.io/projected/238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c-kube-api-access-8hq4h\") pod \"coredns-7db6d8ff4d-t9ghv\" (UID: \"238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c\") " pod="kube-system/coredns-7db6d8ff4d-t9ghv" Jan 29 12:56:05.558655 containerd[1580]: time="2025-01-29T12:56:05.558579495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8b47,Uid:dfee4c06-00be-4b24-82c3-46cade2a09c8,Namespace:calico-system,Attempt:0,}" Jan 29 12:56:06.095423 containerd[1580]: 
time="2025-01-29T12:56:06.094571920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t9ghv,Uid:238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c,Namespace:kube-system,Attempt:0,}" Jan 29 12:56:06.165159 containerd[1580]: time="2025-01-29T12:56:06.164982258Z" level=info msg="shim disconnected" id=9abd71d86df4c183d81ab37c2b40a4464c9e24985b2fc0e9975649e018c479c0 namespace=k8s.io Jan 29 12:56:06.165159 containerd[1580]: time="2025-01-29T12:56:06.165081743Z" level=warning msg="cleaning up after shim disconnected" id=9abd71d86df4c183d81ab37c2b40a4464c9e24985b2fc0e9975649e018c479c0 namespace=k8s.io Jan 29 12:56:06.165159 containerd[1580]: time="2025-01-29T12:56:06.165136055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:56:06.300245 containerd[1580]: time="2025-01-29T12:56:06.300189728Z" level=error msg="Failed to destroy network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.300794 containerd[1580]: time="2025-01-29T12:56:06.300733426Z" level=error msg="encountered an error cleaning up failed sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.300909 containerd[1580]: time="2025-01-29T12:56:06.300826880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8b47,Uid:dfee4c06-00be-4b24-82c3-46cade2a09c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.301895 kubelet[2821]: E0129 12:56:06.301092 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.301895 kubelet[2821]: E0129 12:56:06.301242 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x8b47" Jan 29 12:56:06.301895 kubelet[2821]: E0129 12:56:06.301270 2821 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-x8b47" Jan 29 12:56:06.302834 kubelet[2821]: E0129 12:56:06.302216 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x8b47_calico-system(dfee4c06-00be-4b24-82c3-46cade2a09c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x8b47_calico-system(dfee4c06-00be-4b24-82c3-46cade2a09c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8" Jan 29 12:56:06.313373 containerd[1580]: time="2025-01-29T12:56:06.313246814Z" level=error msg="Failed to destroy network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.313620 containerd[1580]: time="2025-01-29T12:56:06.313592270Z" level=error msg="encountered an error cleaning up failed sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.313710 containerd[1580]: time="2025-01-29T12:56:06.313650559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t9ghv,Uid:238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.313998 kubelet[2821]: E0129 12:56:06.313934 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.314963 kubelet[2821]: E0129 12:56:06.314098 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t9ghv" Jan 29 12:56:06.314963 kubelet[2821]: E0129 12:56:06.314126 2821 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t9ghv" Jan 29 12:56:06.314963 kubelet[2821]: E0129 12:56:06.314175 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t9ghv_kube-system(238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t9ghv_kube-system(238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t9ghv" podUID="238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c" Jan 29 12:56:06.349805 containerd[1580]: time="2025-01-29T12:56:06.349687921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68547577c7-5ck74,Uid:6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7,Namespace:calico-system,Attempt:0,}" Jan 29 12:56:06.356432 containerd[1580]: time="2025-01-29T12:56:06.356208343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxmjl,Uid:1e22ea74-beab-433f-90fd-803bfaa2c127,Namespace:kube-system,Attempt:0,}" Jan 29 12:56:06.361037 containerd[1580]: time="2025-01-29T12:56:06.360898911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cd9546dc-kqbdx,Uid:f40c0f50-7def-47b4-a1b4-bdc72134047a,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:56:06.363719 containerd[1580]: time="2025-01-29T12:56:06.363587853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cd9546dc-s76sw,Uid:f2143d0c-26ec-4f29-91e9-994f575528dd,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:56:06.600423 containerd[1580]: time="2025-01-29T12:56:06.599968522Z" level=error msg="Failed to destroy network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.600423 containerd[1580]: time="2025-01-29T12:56:06.600322475Z" level=error msg="encountered an error cleaning up failed sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.600423 containerd[1580]: time="2025-01-29T12:56:06.600376295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxmjl,Uid:1e22ea74-beab-433f-90fd-803bfaa2c127,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.602555 kubelet[2821]: E0129 12:56:06.601498 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.602555 kubelet[2821]: E0129 12:56:06.601565 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wxmjl" Jan 29 12:56:06.602555 kubelet[2821]: E0129 12:56:06.601589 2821 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wxmjl" Jan 29 12:56:06.602704 kubelet[2821]: E0129 12:56:06.601640 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wxmjl_kube-system(1e22ea74-beab-433f-90fd-803bfaa2c127)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wxmjl_kube-system(1e22ea74-beab-433f-90fd-803bfaa2c127)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wxmjl" podUID="1e22ea74-beab-433f-90fd-803bfaa2c127" Jan 29 12:56:06.612859 containerd[1580]: time="2025-01-29T12:56:06.612794184Z" level=error msg="Failed to destroy network for sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.613928 containerd[1580]: time="2025-01-29T12:56:06.613653022Z" level=error msg="encountered an error cleaning up failed sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.613928 containerd[1580]: time="2025-01-29T12:56:06.613799466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cd9546dc-kqbdx,Uid:f40c0f50-7def-47b4-a1b4-bdc72134047a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.615085 kubelet[2821]: E0129 12:56:06.614174 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.615085 kubelet[2821]: E0129 12:56:06.614233 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64cd9546dc-kqbdx" Jan 29 12:56:06.615085 kubelet[2821]: E0129 12:56:06.614259 2821 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64cd9546dc-kqbdx" Jan 29 12:56:06.615215 kubelet[2821]: E0129 12:56:06.614302 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64cd9546dc-kqbdx_calico-apiserver(f40c0f50-7def-47b4-a1b4-bdc72134047a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64cd9546dc-kqbdx_calico-apiserver(f40c0f50-7def-47b4-a1b4-bdc72134047a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64cd9546dc-kqbdx" podUID="f40c0f50-7def-47b4-a1b4-bdc72134047a" Jan 29 12:56:06.621818 containerd[1580]: time="2025-01-29T12:56:06.621752148Z" level=error msg="Failed to destroy network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.622856 containerd[1580]: time="2025-01-29T12:56:06.622600316Z" level=error msg="encountered an error cleaning up failed sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.623133 containerd[1580]: time="2025-01-29T12:56:06.623088559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68547577c7-5ck74,Uid:6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.623618 kubelet[2821]: E0129 12:56:06.623501 
2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.623618 kubelet[2821]: E0129 12:56:06.623565 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68547577c7-5ck74" Jan 29 12:56:06.623618 kubelet[2821]: E0129 12:56:06.623588 2821 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68547577c7-5ck74" Jan 29 12:56:06.625151 kubelet[2821]: E0129 12:56:06.623818 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68547577c7-5ck74_calico-system(6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68547577c7-5ck74_calico-system(6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68547577c7-5ck74" podUID="6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7" Jan 29 12:56:06.627690 containerd[1580]: time="2025-01-29T12:56:06.627634788Z" level=error msg="Failed to destroy network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.628039 containerd[1580]: time="2025-01-29T12:56:06.627999931Z" level=error msg="encountered an error cleaning up failed sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.628120 containerd[1580]: time="2025-01-29T12:56:06.628063209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cd9546dc-s76sw,Uid:f2143d0c-26ec-4f29-91e9-994f575528dd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.628282 kubelet[2821]: E0129 12:56:06.628251 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.628360 kubelet[2821]: E0129 12:56:06.628299 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64cd9546dc-s76sw" Jan 29 12:56:06.628360 kubelet[2821]: E0129 12:56:06.628324 2821 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64cd9546dc-s76sw" Jan 29 12:56:06.628458 kubelet[2821]: E0129 12:56:06.628365 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64cd9546dc-s76sw_calico-apiserver(f2143d0c-26ec-4f29-91e9-994f575528dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64cd9546dc-s76sw_calico-apiserver(f2143d0c-26ec-4f29-91e9-994f575528dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64cd9546dc-s76sw" podUID="f2143d0c-26ec-4f29-91e9-994f575528dd" Jan 29 12:56:06.763058 kubelet[2821]: I0129 12:56:06.762086 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:06.763963 containerd[1580]: time="2025-01-29T12:56:06.763689894Z" level=info msg="StopPodSandbox for \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\"" Jan 29 12:56:06.764236 containerd[1580]: time="2025-01-29T12:56:06.764195661Z" level=info msg="Ensure that sandbox f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676 in task-service has been cleanup successfully" Jan 29 12:56:06.769458 kubelet[2821]: I0129 12:56:06.769417 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:06.771597 containerd[1580]: time="2025-01-29T12:56:06.770934342Z" level=info msg="StopPodSandbox for \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\"" Jan 29 12:56:06.771597 containerd[1580]: time="2025-01-29T12:56:06.771270751Z" level=info msg="Ensure that sandbox 
11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83 in task-service has been cleanup successfully" Jan 29 12:56:06.778276 kubelet[2821]: I0129 12:56:06.778200 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:06.783627 kubelet[2821]: I0129 12:56:06.783578 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:06.788832 containerd[1580]: time="2025-01-29T12:56:06.788319117Z" level=info msg="StopPodSandbox for \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\"" Jan 29 12:56:06.789223 containerd[1580]: time="2025-01-29T12:56:06.789074621Z" level=info msg="Ensure that sandbox cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53 in task-service has been cleanup successfully" Jan 29 12:56:06.789459 containerd[1580]: time="2025-01-29T12:56:06.789389279Z" level=info msg="StopPodSandbox for \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\"" Jan 29 12:56:06.790013 containerd[1580]: time="2025-01-29T12:56:06.789968543Z" level=info msg="Ensure that sandbox 419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603 in task-service has been cleanup successfully" Jan 29 12:56:06.802381 kubelet[2821]: I0129 12:56:06.801903 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:06.808334 containerd[1580]: time="2025-01-29T12:56:06.808243473Z" level=info msg="StopPodSandbox for \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\"" Jan 29 12:56:06.810574 containerd[1580]: time="2025-01-29T12:56:06.810216916Z" level=info msg="Ensure that sandbox 841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec in task-service has been cleanup successfully" Jan 29 12:56:06.837896 containerd[1580]: time="2025-01-29T12:56:06.837830373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:56:06.841076 kubelet[2821]: I0129 12:56:06.840939 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:06.843355 containerd[1580]: time="2025-01-29T12:56:06.843235118Z" level=info msg="StopPodSandbox for \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\"" Jan 29 12:56:06.845482 containerd[1580]: time="2025-01-29T12:56:06.845261349Z" level=info msg="Ensure that sandbox 91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7 in task-service has been cleanup successfully" Jan 29 12:56:06.916853 containerd[1580]: time="2025-01-29T12:56:06.916716974Z" level=error msg="StopPodSandbox for \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\" failed" error="failed to destroy network for sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.917957 kubelet[2821]: E0129 12:56:06.917925 2821 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:06.919002 kubelet[2821]: E0129 12:56:06.918313 2821 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676"} Jan 29 12:56:06.919002 kubelet[2821]: E0129 12:56:06.918489 2821 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f40c0f50-7def-47b4-a1b4-bdc72134047a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:56:06.919002 kubelet[2821]: E0129 12:56:06.918530 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f40c0f50-7def-47b4-a1b4-bdc72134047a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64cd9546dc-kqbdx" podUID="f40c0f50-7def-47b4-a1b4-bdc72134047a" Jan 29 12:56:06.926164 containerd[1580]: time="2025-01-29T12:56:06.925879500Z" level=error msg="StopPodSandbox for \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\" failed" error="failed to destroy network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.926343 kubelet[2821]: E0129 12:56:06.926294 2821 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:06.926406 kubelet[2821]: E0129 12:56:06.926355 2821 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83"} Jan 29 12:56:06.926406 kubelet[2821]: E0129 12:56:06.926392 2821 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:56:06.926512 kubelet[2821]: E0129 
12:56:06.926418 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68547577c7-5ck74" podUID="6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7" Jan 29 12:56:06.943159 containerd[1580]: time="2025-01-29T12:56:06.943071724Z" level=error msg="StopPodSandbox for \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\" failed" error="failed to destroy network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.943791 kubelet[2821]: E0129 12:56:06.943660 2821 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:06.943791 kubelet[2821]: E0129 12:56:06.943735 2821 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603"} Jan 29 12:56:06.943994 kubelet[2821]: E0129 12:56:06.943796 2821 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:56:06.943994 kubelet[2821]: E0129 12:56:06.943830 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t9ghv" podUID="238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c" Jan 29 12:56:06.946757 containerd[1580]: time="2025-01-29T12:56:06.946653899Z" level=error msg="StopPodSandbox for \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\" failed" error="failed to destroy network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 12:56:06.946757 containerd[1580]: time="2025-01-29T12:56:06.946727596Z" level=error msg="StopPodSandbox for \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\" failed" error="failed to destroy network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.947160 kubelet[2821]: E0129 12:56:06.946963 2821 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:06.947160 kubelet[2821]: E0129 12:56:06.947013 2821 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec"} Jan 29 12:56:06.947160 kubelet[2821]: E0129 12:56:06.947048 2821 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dfee4c06-00be-4b24-82c3-46cade2a09c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:56:06.947160 kubelet[2821]: E0129 12:56:06.947079 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dfee4c06-00be-4b24-82c3-46cade2a09c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x8b47" podUID="dfee4c06-00be-4b24-82c3-46cade2a09c8" Jan 29 12:56:06.947372 kubelet[2821]: E0129 12:56:06.947246 2821 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:06.947372 kubelet[2821]: E0129 12:56:06.947282 2821 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53"} Jan 29 12:56:06.947372 kubelet[2821]: E0129 12:56:06.947313 2821 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2143d0c-26ec-4f29-91e9-994f575528dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:56:06.947372 kubelet[2821]: E0129 12:56:06.947348 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2143d0c-26ec-4f29-91e9-994f575528dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64cd9546dc-s76sw" podUID="f2143d0c-26ec-4f29-91e9-994f575528dd" Jan 29 12:56:06.955687 containerd[1580]: time="2025-01-29T12:56:06.955359670Z" level=error msg="StopPodSandbox for \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\" failed" error="failed to destroy network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:56:06.955827 kubelet[2821]: E0129 12:56:06.955552 2821 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:06.955827 kubelet[2821]: E0129 12:56:06.955590 2821 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7"} Jan 29 12:56:06.955827 kubelet[2821]: E0129 12:56:06.955621 2821 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e22ea74-beab-433f-90fd-803bfaa2c127\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:56:06.955827 kubelet[2821]: E0129 12:56:06.955644 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e22ea74-beab-433f-90fd-803bfaa2c127\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wxmjl" podUID="1e22ea74-beab-433f-90fd-803bfaa2c127" Jan 29 12:56:07.141595 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603-shm.mount: Deactivated 
successfully. Jan 29 12:56:07.141967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec-shm.mount: Deactivated successfully. Jan 29 12:56:15.504064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3899843883.mount: Deactivated successfully. Jan 29 12:56:15.563032 containerd[1580]: time="2025-01-29T12:56:15.562984408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:15.565440 containerd[1580]: time="2025-01-29T12:56:15.565378705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:56:15.567177 containerd[1580]: time="2025-01-29T12:56:15.567134861Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:15.603346 containerd[1580]: time="2025-01-29T12:56:15.603150986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:15.603972 containerd[1580]: time="2025-01-29T12:56:15.603693774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.763568215s" Jan 29 12:56:15.603972 containerd[1580]: time="2025-01-29T12:56:15.603761683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 12:56:15.647654 containerd[1580]: time="2025-01-29T12:56:15.647596019Z" level=info msg="CreateContainer within sandbox \"a09d9f56cc7779244cab1c8af162945b93ac1f5b5fbf2ed8be2e53ca81b74a2d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:56:15.676410 containerd[1580]: time="2025-01-29T12:56:15.676332753Z" level=info msg="CreateContainer within sandbox \"a09d9f56cc7779244cab1c8af162945b93ac1f5b5fbf2ed8be2e53ca81b74a2d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1ef75dbfd46fab3371c2e13ece62256ce3d7f9dae546b2f85ca6604a81b67893\"" Jan 29 12:56:15.678839 containerd[1580]: time="2025-01-29T12:56:15.678549493Z" level=info msg="StartContainer for \"1ef75dbfd46fab3371c2e13ece62256ce3d7f9dae546b2f85ca6604a81b67893\"" Jan 29 12:56:15.767786 containerd[1580]: time="2025-01-29T12:56:15.766729275Z" level=info msg="StartContainer for \"1ef75dbfd46fab3371c2e13ece62256ce3d7f9dae546b2f85ca6604a81b67893\" returns successfully" Jan 29 12:56:15.851908 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:56:15.852004 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
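Every KillPodSandbox failure above has the same root cause: the Calico CNI plugin cannot read /var/lib/calico/nodename, a file that the calico/node container writes on startup, and at this point in the log that container's image was still being pulled. A minimal sketch of the check, assuming the plugin simply reads the hostPath-mounted file (an illustration, not Calico's actual code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenamePath is the hostPath file that calico/node writes on startup.
    const nodenamePath = "/var/lib/calico/nodename"

    // detectNodename mirrors the failure mode in the log: if the file is
    // missing, fail with a hint to check the calico/node container.
    func detectNodename() (string, error) {
        data, err := os.ReadFile(nodenamePath)
        if err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := detectNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node name:", name)
    }

Once calico-node starts (the StartContainer entry above at 12:56:15), the file exists and the retried StopPodSandbox calls below succeed.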
Jan 29 12:56:15.903111 kubelet[2821]: I0129 12:56:15.902894 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-85h29" podStartSLOduration=1.488944954 podStartE2EDuration="26.902876563s" podCreationTimestamp="2025-01-29 12:55:49 +0000 UTC" firstStartedPulling="2025-01-29 12:55:50.191122212 +0000 UTC m=+20.741304776" lastFinishedPulling="2025-01-29 12:56:15.605053771 +0000 UTC m=+46.155236385" observedRunningTime="2025-01-29 12:56:15.899256084 +0000 UTC m=+46.449438658" watchObservedRunningTime="2025-01-29 12:56:15.902876563 +0000 UTC m=+46.453059127" Jan 29 12:56:16.929247 systemd[1]: run-containerd-runc-k8s.io-1ef75dbfd46fab3371c2e13ece62256ce3d7f9dae546b2f85ca6604a81b67893-runc.ByvrYA.mount: Deactivated successfully. Jan 29 12:56:17.512807 kernel: bpftool[4171]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:56:17.555783 containerd[1580]: time="2025-01-29T12:56:17.553862530Z" level=info msg="StopPodSandbox for \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\"" Jan 29 12:56:17.556648 containerd[1580]: time="2025-01-29T12:56:17.556426815Z" level=info msg="StopPodSandbox for \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\"" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.697 [INFO][4199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.697 [INFO][4199] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" iface="eth0" netns="/var/run/netns/cni-df09046b-66fd-7a52-4a5a-7649c12e3c8e" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.697 [INFO][4199] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" iface="eth0" netns="/var/run/netns/cni-df09046b-66fd-7a52-4a5a-7649c12e3c8e" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.698 [INFO][4199] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" iface="eth0" netns="/var/run/netns/cni-df09046b-66fd-7a52-4a5a-7649c12e3c8e" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.698 [INFO][4199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.699 [INFO][4199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.744 [INFO][4211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" HandleID="k8s-pod-network.11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.745 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.745 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
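The pod_startup_latency_tracker entry above separates image-pull time from the rest of startup: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the pull window (firstStartedPulling to lastFinishedPulling). Re-deriving it from the logged timestamps confirms the arithmetic (the final digits differ from the tracker's value only by clock rounding):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-01-29 12:55:49 +0000 UTC")
        pullStart := mustParse("2025-01-29 12:55:50.191122212 +0000 UTC")
        pullEnd := mustParse("2025-01-29 12:56:15.605053771 +0000 UTC")
        running := mustParse("2025-01-29 12:56:15.902876563 +0000 UTC")

        e2e := running.Sub(created)    // 26.902876563s = podStartE2EDuration
        pull := pullEnd.Sub(pullStart) // ~25.414s spent pulling calico/node
        slo := e2e - pull              // ~1.489s = podStartSLOduration
        fmt.Println(e2e, pull, slo)
    }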
Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.753 [WARNING][4211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" HandleID="k8s-pod-network.11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.753 [INFO][4211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" HandleID="k8s-pod-network.11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.756 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:17.765587 containerd[1580]: 2025-01-29 12:56:17.763 [INFO][4199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:17.768785 containerd[1580]: time="2025-01-29T12:56:17.766873163Z" level=info msg="TearDown network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\" successfully" Jan 29 12:56:17.768785 containerd[1580]: time="2025-01-29T12:56:17.766932335Z" level=info msg="StopPodSandbox for \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\" returns successfully" Jan 29 12:56:17.771702 systemd[1]: run-netns-cni\x2ddf09046b\x2d66fd\x2d7a52\x2d4a5a\x2d7649c12e3c8e.mount: Deactivated successfully. Jan 29 12:56:17.778367 containerd[1580]: time="2025-01-29T12:56:17.777966918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68547577c7-5ck74,Uid:6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7,Namespace:calico-system,Attempt:1,}" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.691 [INFO][4198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.696 [INFO][4198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" iface="eth0" netns="/var/run/netns/cni-b36ce83b-94dd-6ced-35ca-8b5c83e8907f" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.696 [INFO][4198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" iface="eth0" netns="/var/run/netns/cni-b36ce83b-94dd-6ced-35ca-8b5c83e8907f" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.698 [INFO][4198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" iface="eth0" netns="/var/run/netns/cni-b36ce83b-94dd-6ced-35ca-8b5c83e8907f" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.698 [INFO][4198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.699 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.746 [INFO][4210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" HandleID="k8s-pod-network.cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.746 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.757 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.776 [WARNING][4210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" HandleID="k8s-pod-network.cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.776 [INFO][4210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" HandleID="k8s-pod-network.cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.780 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:17.791598 containerd[1580]: 2025-01-29 12:56:17.785 [INFO][4198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:17.794153 containerd[1580]: time="2025-01-29T12:56:17.792997976Z" level=info msg="TearDown network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\" successfully" Jan 29 12:56:17.794153 containerd[1580]: time="2025-01-29T12:56:17.793030539Z" level=info msg="StopPodSandbox for \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\" returns successfully" Jan 29 12:56:17.797477 containerd[1580]: time="2025-01-29T12:56:17.796176585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cd9546dc-s76sw,Uid:f2143d0c-26ec-4f29-91e9-994f575528dd,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:56:17.796467 systemd[1]: run-netns-cni\x2db36ce83b\x2d94dd\x2d6ced\x2d35ca\x2d8b5c83e8907f.mount: Deactivated successfully. 
Jan 29 12:56:17.857012 systemd-networkd[1206]: vxlan.calico: Link UP Jan 29 12:56:17.859340 systemd-networkd[1206]: vxlan.calico: Gained carrier Jan 29 12:56:18.186377 systemd-networkd[1206]: cali9b148c94038: Link UP Jan 29 12:56:18.186579 systemd-networkd[1206]: cali9b148c94038: Gained carrier Jan 29 12:56:18.212848 systemd-networkd[1206]: calibc12303ede8: Link UP Jan 29 12:56:18.215834 systemd-networkd[1206]: calibc12303ede8: Gained carrier Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:17.964 [INFO][4250] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0 calico-apiserver-64cd9546dc- calico-apiserver f2143d0c-26ec-4f29-91e9-994f575528dd 760 0 2025-01-29 12:55:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64cd9546dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-e-0a72854eea.novalocal calico-apiserver-64cd9546dc-s76sw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b148c94038 [] []}} ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-s76sw" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:17.964 [INFO][4250] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-s76sw" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.031 [INFO][4288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" HandleID="k8s-pod-network.272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.043 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" HandleID="k8s-pod-network.272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002936d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-e-0a72854eea.novalocal", "pod":"calico-apiserver-64cd9546dc-s76sw", "timestamp":"2025-01-29 12:56:18.03140117 +0000 UTC"}, Hostname:"ci-4081-3-0-e-0a72854eea.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.043 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.106 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
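The vxlan.calico device that systemd-networkd reports here is created by the freshly started calico-node dataplane; networkd only observes it gaining carrier. A sketch of creating an equivalent device with the vishvananda/netlink package, using Calico's default VNI (4096) and VXLAN UDP port (4789); the underlay interface name and VTEP address below are placeholders, not values from this log:

    package main

    import (
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    func main() {
        parent, err := netlink.LinkByName("eth0") // assumed underlay interface
        if err != nil {
            log.Fatal(err)
        }
        vxlan := &netlink.Vxlan{
            LinkAttrs:    netlink.LinkAttrs{Name: "vxlan.calico"},
            VxlanId:      4096,                      // Calico's default VNI
            Port:         4789,                      // default VXLAN UDP port
            VtepDevIndex: parent.Attrs().Index,
            SrcAddr:      net.ParseIP("192.0.2.10"), // placeholder VTEP address
        }
        if err := netlink.LinkAdd(vxlan); err != nil {
            log.Fatal(err)
        }
        if err := netlink.LinkSetUp(vxlan); err != nil { // "Gained carrier" follows
            log.Fatal(err)
        }
    }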
Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.106 [INFO][4288] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-0a72854eea.novalocal' Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.109 [INFO][4288] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.114 [INFO][4288] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.118 [INFO][4288] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.121 [INFO][4288] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.124 [INFO][4288] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.124 [INFO][4288] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.126 [INFO][4288] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.153 [INFO][4288] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.176 [INFO][4288] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.2/26] block=192.168.17.0/26 handle="k8s-pod-network.272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.177 [INFO][4288] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.2/26] handle="k8s-pod-network.272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.177 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
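The IPAM walk above follows a fixed sequence: look up the host's block affinity, load the 192.168.17.0/26 block, claim the next free address, and write the block back to the datastore ("Writing block in order to claim IPs"). A toy first-fit model of that walk; seeding .0 as already taken is an assumption made to match this trace, where the first pod address handed out is .1 (the block's lowest address had presumably gone to an earlier allocation, such as the node's VXLAN tunnel IP):

    package main

    import (
        "fmt"
        "net/netip"
    )

    type block struct {
        cidr      netip.Prefix
        allocated map[netip.Addr]string // address -> handle ID
    }

    // assign claims the first free address in the block, as in the
    // "Attempting to assign 1 addresses from block" entries above.
    func (b *block) assign(handle string) (netip.Addr, bool) {
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.allocated[a]; !taken {
                b.allocated[a] = handle // "Writing block in order to claim IPs"
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{
            cidr: netip.MustParsePrefix("192.168.17.0/26"),
            allocated: map[netip.Addr]string{
                // Assumption: .0 was claimed before this trace begins.
                netip.MustParseAddr("192.168.17.0"): "earlier-allocation",
            },
        }
        for _, h := range []string{"kube-controllers", "apiserver", "coredns"} {
            ip, _ := b.assign(h)
            fmt.Println(h, "->", ip) // .1, .2, .3, the order seen in this log
        }
    }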
Jan 29 12:56:18.253072 containerd[1580]: 2025-01-29 12:56:18.177 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.2/26] IPv6=[] ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" HandleID="k8s-pod-network.272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:18.254534 containerd[1580]: 2025-01-29 12:56:18.179 [INFO][4250] cni-plugin/k8s.go 386: Populated endpoint ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-s76sw" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0", GenerateName:"calico-apiserver-64cd9546dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2143d0c-26ec-4f29-91e9-994f575528dd", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cd9546dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"", Pod:"calico-apiserver-64cd9546dc-s76sw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b148c94038", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:18.254534 containerd[1580]: 2025-01-29 12:56:18.180 [INFO][4250] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.2/32] ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-s76sw" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:18.254534 containerd[1580]: 2025-01-29 12:56:18.180 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b148c94038 ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-s76sw" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:18.254534 containerd[1580]: 2025-01-29 12:56:18.213 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-s76sw" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:18.254534 
containerd[1580]: 2025-01-29 12:56:18.214 [INFO][4250] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-s76sw" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0", GenerateName:"calico-apiserver-64cd9546dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2143d0c-26ec-4f29-91e9-994f575528dd", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cd9546dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f", Pod:"calico-apiserver-64cd9546dc-s76sw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b148c94038", MAC:"76:81:b6:10:b8:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:18.254534 containerd[1580]: 2025-01-29 12:56:18.250 [INFO][4250] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-s76sw" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:17.936 [INFO][4240] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0 calico-kube-controllers-68547577c7- calico-system 6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7 761 0 2025-01-29 12:55:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68547577c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-e-0a72854eea.novalocal calico-kube-controllers-68547577c7-5ck74 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibc12303ede8 [] []}} ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Namespace="calico-system" Pod="calico-kube-controllers-68547577c7-5ck74" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 
12:56:17.940 [INFO][4240] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Namespace="calico-system" Pod="calico-kube-controllers-68547577c7-5ck74" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.020 [INFO][4283] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" HandleID="k8s-pod-network.2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.039 [INFO][4283] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" HandleID="k8s-pod-network.2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004829f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-e-0a72854eea.novalocal", "pod":"calico-kube-controllers-68547577c7-5ck74", "timestamp":"2025-01-29 12:56:18.018698157 +0000 UTC"}, Hostname:"ci-4081-3-0-e-0a72854eea.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.040 [INFO][4283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.040 [INFO][4283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.040 [INFO][4283] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-0a72854eea.novalocal' Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.042 [INFO][4283] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.047 [INFO][4283] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.052 [INFO][4283] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.055 [INFO][4283] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.057 [INFO][4283] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.057 [INFO][4283] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.059 [INFO][4283] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94 Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.075 [INFO][4283] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.106 [INFO][4283] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.1/26] block=192.168.17.0/26 handle="k8s-pod-network.2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.106 [INFO][4283] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.1/26] handle="k8s-pod-network.2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.106 [INFO][4283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
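The endpoint-population entries that follow ("Setting the host side veth name to calibc12303ede8", "Added Mac, interface name, and active container ID") correspond to creating a veth pair, keeping the cali* end in the host namespace, and moving the peer into the pod's netns, where it becomes eth0 and receives the claimed IPAM address. A sketch with vishvananda/netlink; the temporary peer name and the netns path below are placeholders:

    package main

    import (
        "log"
        "os"

        "github.com/vishvananda/netlink"
    )

    func main() {
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: "calibc12303ede8"}, // host side
            PeerName:  "tmpcali0",                                 // assumed temp name
        }
        if err := netlink.LinkAdd(veth); err != nil {
            log.Fatal(err)
        }
        peer, err := netlink.LinkByName("tmpcali0")
        if err != nil {
            log.Fatal(err)
        }
        ns, err := os.Open("/var/run/netns/example") // placeholder netns path
        if err != nil {
            log.Fatal(err)
        }
        defer ns.Close()
        // Inside the pod namespace the peer would be renamed to eth0 and
        // given the claimed address (192.168.17.1/32 for this workload).
        if err := netlink.LinkSetNsFd(peer, int(ns.Fd())); err != nil {
            log.Fatal(err)
        }
    }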
Jan 29 12:56:18.353008 containerd[1580]: 2025-01-29 12:56:18.106 [INFO][4283] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.1/26] IPv6=[] ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" HandleID="k8s-pod-network.2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:18.354693 containerd[1580]: 2025-01-29 12:56:18.108 [INFO][4240] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Namespace="calico-system" Pod="calico-kube-controllers-68547577c7-5ck74" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0", GenerateName:"calico-kube-controllers-68547577c7-", Namespace:"calico-system", SelfLink:"", UID:"6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68547577c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"", Pod:"calico-kube-controllers-68547577c7-5ck74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc12303ede8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:18.354693 containerd[1580]: 2025-01-29 12:56:18.109 [INFO][4240] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.1/32] ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Namespace="calico-system" Pod="calico-kube-controllers-68547577c7-5ck74" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:18.354693 containerd[1580]: 2025-01-29 12:56:18.109 [INFO][4240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc12303ede8 ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Namespace="calico-system" Pod="calico-kube-controllers-68547577c7-5ck74" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:18.354693 containerd[1580]: 2025-01-29 12:56:18.214 [INFO][4240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Namespace="calico-system" Pod="calico-kube-controllers-68547577c7-5ck74" 
WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:18.354693 containerd[1580]: 2025-01-29 12:56:18.214 [INFO][4240] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Namespace="calico-system" Pod="calico-kube-controllers-68547577c7-5ck74" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0", GenerateName:"calico-kube-controllers-68547577c7-", Namespace:"calico-system", SelfLink:"", UID:"6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68547577c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94", Pod:"calico-kube-controllers-68547577c7-5ck74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc12303ede8", MAC:"8a:a3:5c:55:0d:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:18.354693 containerd[1580]: 2025-01-29 12:56:18.343 [INFO][4240] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94" Namespace="calico-system" Pod="calico-kube-controllers-68547577c7-5ck74" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:18.552905 containerd[1580]: time="2025-01-29T12:56:18.552859784Z" level=info msg="StopPodSandbox for \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\"" Jan 29 12:56:18.554874 containerd[1580]: time="2025-01-29T12:56:18.553351404Z" level=info msg="StopPodSandbox for \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\"" Jan 29 12:56:18.604256 containerd[1580]: time="2025-01-29T12:56:18.603957150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:56:18.604653 containerd[1580]: time="2025-01-29T12:56:18.604258079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:56:18.604653 containerd[1580]: time="2025-01-29T12:56:18.604286393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:18.604653 containerd[1580]: time="2025-01-29T12:56:18.604389018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:18.700640 containerd[1580]: time="2025-01-29T12:56:18.700593250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cd9546dc-s76sw,Uid:f2143d0c-26ec-4f29-91e9-994f575528dd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f\"" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.650 [INFO][4389] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.650 [INFO][4389] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" iface="eth0" netns="/var/run/netns/cni-001259de-7dcc-b898-c97b-d69e02c0712a" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.650 [INFO][4389] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" iface="eth0" netns="/var/run/netns/cni-001259de-7dcc-b898-c97b-d69e02c0712a" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.651 [INFO][4389] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" iface="eth0" netns="/var/run/netns/cni-001259de-7dcc-b898-c97b-d69e02c0712a" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.651 [INFO][4389] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.651 [INFO][4389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.699 [INFO][4430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" HandleID="k8s-pod-network.419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.700 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.700 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.710 [WARNING][4430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" HandleID="k8s-pod-network.419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.710 [INFO][4430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" HandleID="k8s-pod-network.419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.714 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:18.723945 containerd[1580]: 2025-01-29 12:56:18.721 [INFO][4389] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:18.725499 containerd[1580]: time="2025-01-29T12:56:18.725292610Z" level=info msg="TearDown network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\" successfully" Jan 29 12:56:18.725669 containerd[1580]: time="2025-01-29T12:56:18.725328408Z" level=info msg="StopPodSandbox for \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\" returns successfully" Jan 29 12:56:18.729026 containerd[1580]: time="2025-01-29T12:56:18.729002993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t9ghv,Uid:238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c,Namespace:kube-system,Attempt:1,}" Jan 29 12:56:18.762471 containerd[1580]: time="2025-01-29T12:56:18.762327970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:56:18.776449 systemd[1]: run-netns-cni\x2d001259de\x2d7dcc\x2db898\x2dc97b\x2dd69e02c0712a.mount: Deactivated successfully. Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.750 [INFO][4381] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.750 [INFO][4381] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" iface="eth0" netns="/var/run/netns/cni-28ae53f6-7641-076f-e792-33d88ab4ff33" Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.750 [INFO][4381] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" iface="eth0" netns="/var/run/netns/cni-28ae53f6-7641-076f-e792-33d88ab4ff33" Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.750 [INFO][4381] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" iface="eth0" netns="/var/run/netns/cni-28ae53f6-7641-076f-e792-33d88ab4ff33" Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.750 [INFO][4381] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.750 [INFO][4381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.786 [INFO][4444] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" HandleID="k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.786 [INFO][4444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.786 [INFO][4444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.795 [WARNING][4444] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" HandleID="k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.795 [INFO][4444] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" HandleID="k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.798 [INFO][4444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:18.800337 containerd[1580]: 2025-01-29 12:56:18.799 [INFO][4381] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:18.801500 containerd[1580]: time="2025-01-29T12:56:18.800472419Z" level=info msg="TearDown network for sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\" successfully" Jan 29 12:56:18.801500 containerd[1580]: time="2025-01-29T12:56:18.800499160Z" level=info msg="StopPodSandbox for \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\" returns successfully" Jan 29 12:56:18.801500 containerd[1580]: time="2025-01-29T12:56:18.801100157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cd9546dc-kqbdx,Uid:f40c0f50-7def-47b4-a1b4-bdc72134047a,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:56:18.804299 systemd[1]: run-netns-cni\x2d28ae53f6\x2d7641\x2d076f\x2de792\x2d33d88ab4ff33.mount: Deactivated successfully. Jan 29 12:56:19.138008 containerd[1580]: time="2025-01-29T12:56:19.137118047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:56:19.138008 containerd[1580]: time="2025-01-29T12:56:19.137217365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:56:19.138008 containerd[1580]: time="2025-01-29T12:56:19.137505069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:19.138405 containerd[1580]: time="2025-01-29T12:56:19.138077273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:19.187325 systemd-networkd[1206]: vxlan.calico: Gained IPv6LL Jan 29 12:56:19.286369 containerd[1580]: time="2025-01-29T12:56:19.286264976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68547577c7-5ck74,Uid:6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94\"" Jan 29 12:56:19.335454 systemd-networkd[1206]: calieb14f95517e: Link UP Jan 29 12:56:19.337476 systemd-networkd[1206]: calieb14f95517e: Gained carrier Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.219 [INFO][4468] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0 coredns-7db6d8ff4d- kube-system 238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c 774 0 2025-01-29 12:55:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-e-0a72854eea.novalocal coredns-7db6d8ff4d-t9ghv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieb14f95517e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t9ghv" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.220 [INFO][4468] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t9ghv" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.257 [INFO][4504] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" HandleID="k8s-pod-network.be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.282 [INFO][4504] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" HandleID="k8s-pod-network.be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b370), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4081-3-0-e-0a72854eea.novalocal", "pod":"coredns-7db6d8ff4d-t9ghv", "timestamp":"2025-01-29 12:56:19.257503362 +0000 UTC"}, Hostname:"ci-4081-3-0-e-0a72854eea.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.282 [INFO][4504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.282 [INFO][4504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.282 [INFO][4504] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-0a72854eea.novalocal' Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.287 [INFO][4504] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.296 [INFO][4504] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.303 [INFO][4504] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.306 [INFO][4504] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.310 [INFO][4504] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.310 [INFO][4504] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.313 [INFO][4504] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7 Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.318 [INFO][4504] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.326 [INFO][4504] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.3/26] block=192.168.17.0/26 handle="k8s-pod-network.be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.326 [INFO][4504] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.3/26] handle="k8s-pod-network.be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.326 [INFO][4504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:56:19.363043 containerd[1580]: 2025-01-29 12:56:19.326 [INFO][4504] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.3/26] IPv6=[] ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" HandleID="k8s-pod-network.be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:19.364095 containerd[1580]: 2025-01-29 12:56:19.328 [INFO][4468] cni-plugin/k8s.go 386: Populated endpoint ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t9ghv" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-t9ghv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb14f95517e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:19.364095 containerd[1580]: 2025-01-29 12:56:19.330 [INFO][4468] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.3/32] ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t9ghv" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:19.364095 containerd[1580]: 2025-01-29 12:56:19.330 [INFO][4468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb14f95517e ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t9ghv" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:19.364095 containerd[1580]: 2025-01-29 12:56:19.336 [INFO][4468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-t9ghv" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:19.364095 containerd[1580]: 2025-01-29 12:56:19.338 [INFO][4468] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t9ghv" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7", Pod:"coredns-7db6d8ff4d-t9ghv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb14f95517e", MAC:"56:92:b1:1b:49:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:19.364095 containerd[1580]: 2025-01-29 12:56:19.360 [INFO][4468] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t9ghv" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:19.410588 containerd[1580]: time="2025-01-29T12:56:19.408514458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:56:19.410588 containerd[1580]: time="2025-01-29T12:56:19.408595220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:56:19.410588 containerd[1580]: time="2025-01-29T12:56:19.408611462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:19.410588 containerd[1580]: time="2025-01-29T12:56:19.408715909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:19.441515 systemd-networkd[1206]: califb68f8625f4: Link UP Jan 29 12:56:19.443041 systemd-networkd[1206]: califb68f8625f4: Gained carrier Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.284 [INFO][4481] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0 calico-apiserver-64cd9546dc- calico-apiserver f40c0f50-7def-47b4-a1b4-bdc72134047a 775 0 2025-01-29 12:55:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64cd9546dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-e-0a72854eea.novalocal calico-apiserver-64cd9546dc-kqbdx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb68f8625f4 [] []}} ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-kqbdx" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.285 [INFO][4481] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-kqbdx" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.342 [INFO][4518] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" HandleID="k8s-pod-network.2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.365 [INFO][4518] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" HandleID="k8s-pod-network.2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051cf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-e-0a72854eea.novalocal", "pod":"calico-apiserver-64cd9546dc-kqbdx", "timestamp":"2025-01-29 12:56:19.342729639 +0000 UTC"}, Hostname:"ci-4081-3-0-e-0a72854eea.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.365 [INFO][4518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.365 [INFO][4518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.365 [INFO][4518] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-0a72854eea.novalocal' Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.371 [INFO][4518] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.382 [INFO][4518] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.393 [INFO][4518] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.396 [INFO][4518] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.399 [INFO][4518] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.399 [INFO][4518] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.402 [INFO][4518] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.411 [INFO][4518] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.424 [INFO][4518] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.4/26] block=192.168.17.0/26 handle="k8s-pod-network.2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.424 [INFO][4518] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.4/26] handle="k8s-pod-network.2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.424 [INFO][4518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
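
[annotation] On the teardown side, the [4444] trace at the top of this section releases the sandbox's allocation by handle first and then by workload ID, ignoring a handle that no longer exists (the WARNING at ipam_plugin.go 429). A sketch of that release path, under the same version assumptions as the previous sketch:

    // Sketch of the release-by-handle step taken during the CNI DELs in this
    // section; same Calico version caveats as the sketch above.
    package main

    import (
        "context"
        "log"

        client "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
    )

    func main() {
        c, err := client.NewFromEnv()
        if err != nil {
            log.Fatal(err)
        }
        // Handle recorded for the torn-down calico-apiserver sandbox above.
        handle := "k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676"
        if err := c.IPAM().ReleaseByHandle(context.Background(), handle); err != nil {
            // The plugin treats a missing handle as non-fatal and falls back to
            // releasing by workload ID (ipam_plugin.go 429/440 in the trace above).
            log.Printf("release by handle: %v", err)
        }
    }
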
Jan 29 12:56:19.474816 containerd[1580]: 2025-01-29 12:56:19.424 [INFO][4518] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.4/26] IPv6=[] ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" HandleID="k8s-pod-network.2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:19.475495 containerd[1580]: 2025-01-29 12:56:19.436 [INFO][4481] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-kqbdx" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0", GenerateName:"calico-apiserver-64cd9546dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f40c0f50-7def-47b4-a1b4-bdc72134047a", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cd9546dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"", Pod:"calico-apiserver-64cd9546dc-kqbdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb68f8625f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:19.475495 containerd[1580]: 2025-01-29 12:56:19.436 [INFO][4481] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.4/32] ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-kqbdx" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:19.475495 containerd[1580]: 2025-01-29 12:56:19.436 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb68f8625f4 ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-kqbdx" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:19.475495 containerd[1580]: 2025-01-29 12:56:19.441 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-kqbdx" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:19.475495 
containerd[1580]: 2025-01-29 12:56:19.445 [INFO][4481] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-kqbdx" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0", GenerateName:"calico-apiserver-64cd9546dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f40c0f50-7def-47b4-a1b4-bdc72134047a", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cd9546dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f", Pod:"calico-apiserver-64cd9546dc-kqbdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb68f8625f4", MAC:"0e:30:3e:8c:3d:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:19.475495 containerd[1580]: 2025-01-29 12:56:19.467 [INFO][4481] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f" Namespace="calico-apiserver" Pod="calico-apiserver-64cd9546dc-kqbdx" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:19.504783 containerd[1580]: time="2025-01-29T12:56:19.504529118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t9ghv,Uid:238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c,Namespace:kube-system,Attempt:1,} returns sandbox id \"be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7\"" Jan 29 12:56:19.510678 containerd[1580]: time="2025-01-29T12:56:19.510507711Z" level=info msg="CreateContainer within sandbox \"be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:56:19.519631 containerd[1580]: time="2025-01-29T12:56:19.519521758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:56:19.520143 containerd[1580]: time="2025-01-29T12:56:19.520044517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:56:19.520209 containerd[1580]: time="2025-01-29T12:56:19.520134648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:19.521103 containerd[1580]: time="2025-01-29T12:56:19.521000967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:19.537667 containerd[1580]: time="2025-01-29T12:56:19.537544170Z" level=info msg="CreateContainer within sandbox \"be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa4deb49388f46dfbf5c917b9954c776b7ad8bbfb634ef308257e220d1c32b87\"" Jan 29 12:56:19.538645 containerd[1580]: time="2025-01-29T12:56:19.538120521Z" level=info msg="StartContainer for \"aa4deb49388f46dfbf5c917b9954c776b7ad8bbfb634ef308257e220d1c32b87\"" Jan 29 12:56:19.609946 containerd[1580]: time="2025-01-29T12:56:19.609708461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cd9546dc-kqbdx,Uid:f40c0f50-7def-47b4-a1b4-bdc72134047a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f\"" Jan 29 12:56:19.631335 containerd[1580]: time="2025-01-29T12:56:19.631144092Z" level=info msg="StartContainer for \"aa4deb49388f46dfbf5c917b9954c776b7ad8bbfb634ef308257e220d1c32b87\" returns successfully" Jan 29 12:56:19.891349 systemd-networkd[1206]: cali9b148c94038: Gained IPv6LL Jan 29 12:56:19.940439 kubelet[2821]: I0129 12:56:19.939633 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t9ghv" podStartSLOduration=37.939598044 podStartE2EDuration="37.939598044s" podCreationTimestamp="2025-01-29 12:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:56:19.935811619 +0000 UTC m=+50.485994193" watchObservedRunningTime="2025-01-29 12:56:19.939598044 +0000 UTC m=+50.489780659" Jan 29 12:56:20.084505 systemd-networkd[1206]: calibc12303ede8: Gained IPv6LL Jan 29 12:56:21.171618 systemd-networkd[1206]: califb68f8625f4: Gained IPv6LL Jan 29 12:56:21.299456 systemd-networkd[1206]: calieb14f95517e: Gained IPv6LL Jan 29 12:56:21.553891 containerd[1580]: time="2025-01-29T12:56:21.553860422Z" level=info msg="StopPodSandbox for \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\"" Jan 29 12:56:21.556544 containerd[1580]: time="2025-01-29T12:56:21.556114725Z" level=info msg="StopPodSandbox for \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\"" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.694 [INFO][4705] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.695 [INFO][4705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" iface="eth0" netns="/var/run/netns/cni-6f578f9b-5460-fc0d-61e6-940038ce6300" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.695 [INFO][4705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" iface="eth0" netns="/var/run/netns/cni-6f578f9b-5460-fc0d-61e6-940038ce6300" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.695 [INFO][4705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" iface="eth0" netns="/var/run/netns/cni-6f578f9b-5460-fc0d-61e6-940038ce6300" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.695 [INFO][4705] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.695 [INFO][4705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.751 [INFO][4718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" HandleID="k8s-pod-network.91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.752 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.752 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.760 [WARNING][4718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" HandleID="k8s-pod-network.91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.760 [INFO][4718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" HandleID="k8s-pod-network.91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.763 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:21.766877 containerd[1580]: 2025-01-29 12:56:21.765 [INFO][4705] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:21.770504 containerd[1580]: time="2025-01-29T12:56:21.768817540Z" level=info msg="TearDown network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\" successfully" Jan 29 12:56:21.770504 containerd[1580]: time="2025-01-29T12:56:21.768852275Z" level=info msg="StopPodSandbox for \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\" returns successfully" Jan 29 12:56:21.775336 containerd[1580]: time="2025-01-29T12:56:21.775305479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxmjl,Uid:1e22ea74-beab-433f-90fd-803bfaa2c127,Namespace:kube-system,Attempt:1,}" Jan 29 12:56:21.779192 systemd[1]: run-netns-cni\x2d6f578f9b\x2d5460\x2dfc0d\x2d61e6\x2d940038ce6300.mount: Deactivated successfully. Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.694 [INFO][4697] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.695 [INFO][4697] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" iface="eth0" netns="/var/run/netns/cni-65f77823-b319-d8bf-68a0-87ae0e9762e0" Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.695 [INFO][4697] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" iface="eth0" netns="/var/run/netns/cni-65f77823-b319-d8bf-68a0-87ae0e9762e0" Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.696 [INFO][4697] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" iface="eth0" netns="/var/run/netns/cni-65f77823-b319-d8bf-68a0-87ae0e9762e0" Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.696 [INFO][4697] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.696 [INFO][4697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.756 [INFO][4719] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" HandleID="k8s-pod-network.841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.756 [INFO][4719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.763 [INFO][4719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.786 [WARNING][4719] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" HandleID="k8s-pod-network.841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.786 [INFO][4719] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" HandleID="k8s-pod-network.841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.789 [INFO][4719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:21.794866 containerd[1580]: 2025-01-29 12:56:21.792 [INFO][4697] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:21.794866 containerd[1580]: time="2025-01-29T12:56:21.794683446Z" level=info msg="TearDown network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\" successfully" Jan 29 12:56:21.794866 containerd[1580]: time="2025-01-29T12:56:21.794824773Z" level=info msg="StopPodSandbox for \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\" returns successfully" Jan 29 12:56:21.796904 containerd[1580]: time="2025-01-29T12:56:21.795697384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8b47,Uid:dfee4c06-00be-4b24-82c3-46cade2a09c8,Namespace:calico-system,Attempt:1,}" Jan 29 12:56:21.801803 systemd[1]: run-netns-cni\x2d65f77823\x2db319\x2dd8bf\x2d68a0\x2d87ae0e9762e0.mount: Deactivated successfully. Jan 29 12:56:22.028266 systemd-networkd[1206]: cali634fd7d0b72: Link UP Jan 29 12:56:22.029498 systemd-networkd[1206]: cali634fd7d0b72: Gained carrier Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.900 [INFO][4740] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0 csi-node-driver- calico-system dfee4c06-00be-4b24-82c3-46cade2a09c8 804 0 2025-01-29 12:55:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-e-0a72854eea.novalocal csi-node-driver-x8b47 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali634fd7d0b72 [] []}} ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Namespace="calico-system" Pod="csi-node-driver-x8b47" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.900 [INFO][4740] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Namespace="calico-system" Pod="csi-node-driver-x8b47" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.959 [INFO][4758] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" HandleID="k8s-pod-network.cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.977 [INFO][4758] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" HandleID="k8s-pod-network.cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eebb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-e-0a72854eea.novalocal", "pod":"csi-node-driver-x8b47", "timestamp":"2025-01-29 12:56:21.959784889 +0000 UTC"}, Hostname:"ci-4081-3-0-e-0a72854eea.novalocal", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.977 [INFO][4758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.977 [INFO][4758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.977 [INFO][4758] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-0a72854eea.novalocal' Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.978 [INFO][4758] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.985 [INFO][4758] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.992 [INFO][4758] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.994 [INFO][4758] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.997 [INFO][4758] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.997 [INFO][4758] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:21.999 [INFO][4758] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:22.007 [INFO][4758] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:22.015 [INFO][4758] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.5/26] block=192.168.17.0/26 handle="k8s-pod-network.cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:22.016 [INFO][4758] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.5/26] handle="k8s-pod-network.cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:22.016 [INFO][4758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
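
[annotation] Each successful ADD in this section ends with systemd-networkd reporting the host-side veth (calieb14f95517e, califb68f8625f4, cali634fd7d0b72, ...) as Link UP, then Gained carrier, then Gained IPv6LL. A small sketch that enumerates those cali* host interfaces and their operational state, assuming the github.com/vishvananda/netlink package:

    // Sketch: list Calico's host-side veth interfaces, as created by the
    // dataplane_linux.go steps above. Assumes github.com/vishvananda/netlink.
    package main

    import (
        "fmt"
        "strings"

        "github.com/vishvananda/netlink"
    )

    func main() {
        links, err := netlink.LinkList()
        if err != nil {
            panic(err)
        }
        for _, l := range links {
            attrs := l.Attrs()
            if strings.HasPrefix(attrs.Name, "cali") {
                // OperState should read "up" once networkd logs "Gained carrier".
                fmt.Printf("%s %s %s\n", attrs.Name, attrs.HardwareAddr, attrs.OperState)
            }
        }
    }

The MAC each dump records (e.g. 7e:5a:65:8f:75:7b for cali634fd7d0b72 below) is the one k8s.go 414 writes back to the endpoint after the veth is created, so the netlink view and the datastore view should agree.
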
Jan 29 12:56:22.053879 containerd[1580]: 2025-01-29 12:56:22.016 [INFO][4758] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.5/26] IPv6=[] ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" HandleID="k8s-pod-network.cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:22.054514 containerd[1580]: 2025-01-29 12:56:22.019 [INFO][4740] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Namespace="calico-system" Pod="csi-node-driver-x8b47" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dfee4c06-00be-4b24-82c3-46cade2a09c8", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"", Pod:"csi-node-driver-x8b47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali634fd7d0b72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:22.054514 containerd[1580]: 2025-01-29 12:56:22.019 [INFO][4740] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.5/32] ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Namespace="calico-system" Pod="csi-node-driver-x8b47" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:22.054514 containerd[1580]: 2025-01-29 12:56:22.020 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali634fd7d0b72 ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Namespace="calico-system" Pod="csi-node-driver-x8b47" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:22.054514 containerd[1580]: 2025-01-29 12:56:22.026 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Namespace="calico-system" Pod="csi-node-driver-x8b47" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:22.054514 containerd[1580]: 2025-01-29 12:56:22.026 [INFO][4740] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Namespace="calico-system" Pod="csi-node-driver-x8b47" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dfee4c06-00be-4b24-82c3-46cade2a09c8", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c", Pod:"csi-node-driver-x8b47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali634fd7d0b72", MAC:"7e:5a:65:8f:75:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:22.054514 containerd[1580]: 2025-01-29 12:56:22.046 [INFO][4740] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c" Namespace="calico-system" Pod="csi-node-driver-x8b47" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:22.094796 systemd-networkd[1206]: cali55d7548fbc9: Link UP Jan 29 12:56:22.095869 systemd-networkd[1206]: cali55d7548fbc9: Gained carrier Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:21.885 [INFO][4730] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0 coredns-7db6d8ff4d- kube-system 1e22ea74-beab-433f-90fd-803bfaa2c127 803 0 2025-01-29 12:55:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-e-0a72854eea.novalocal coredns-7db6d8ff4d-wxmjl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali55d7548fbc9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxmjl" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:21.885 [INFO][4730] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-wxmjl" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:21.963 [INFO][4754] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" HandleID="k8s-pod-network.fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:21.989 [INFO][4754] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" HandleID="k8s-pod-network.fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011aa30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-e-0a72854eea.novalocal", "pod":"coredns-7db6d8ff4d-wxmjl", "timestamp":"2025-01-29 12:56:21.9635029 +0000 UTC"}, Hostname:"ci-4081-3-0-e-0a72854eea.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:21.989 [INFO][4754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.016 [INFO][4754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.016 [INFO][4754] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-0a72854eea.novalocal' Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.019 [INFO][4754] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.030 [INFO][4754] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.043 [INFO][4754] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.052 [INFO][4754] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.059 [INFO][4754] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.059 [INFO][4754] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.063 [INFO][4754] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9 Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.073 [INFO][4754] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 
handle="k8s-pod-network.fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.082 [INFO][4754] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.6/26] block=192.168.17.0/26 handle="k8s-pod-network.fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.082 [INFO][4754] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.6/26] handle="k8s-pod-network.fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" host="ci-4081-3-0-e-0a72854eea.novalocal" Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.082 [INFO][4754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:22.126238 containerd[1580]: 2025-01-29 12:56:22.082 [INFO][4754] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.6/26] IPv6=[] ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" HandleID="k8s-pod-network.fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:22.127672 containerd[1580]: 2025-01-29 12:56:22.086 [INFO][4730] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxmjl" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e22ea74-beab-433f-90fd-803bfaa2c127", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-wxmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55d7548fbc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:22.127672 containerd[1580]: 2025-01-29 12:56:22.087 [INFO][4730] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.6/32] 
ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxmjl" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:22.127672 containerd[1580]: 2025-01-29 12:56:22.088 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55d7548fbc9 ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxmjl" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:22.127672 containerd[1580]: 2025-01-29 12:56:22.096 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxmjl" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:22.127672 containerd[1580]: 2025-01-29 12:56:22.097 [INFO][4730] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxmjl" WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e22ea74-beab-433f-90fd-803bfaa2c127", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9", Pod:"coredns-7db6d8ff4d-wxmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55d7548fbc9", MAC:"fe:86:7e:84:a9:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:22.127672 containerd[1580]: 2025-01-29 12:56:22.120 [INFO][4730] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxmjl" 
WorkloadEndpoint="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:22.136247 containerd[1580]: time="2025-01-29T12:56:22.130912391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:56:22.136247 containerd[1580]: time="2025-01-29T12:56:22.130999075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:56:22.136247 containerd[1580]: time="2025-01-29T12:56:22.131014063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:22.136247 containerd[1580]: time="2025-01-29T12:56:22.131116467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:22.235654 containerd[1580]: time="2025-01-29T12:56:22.235611100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8b47,Uid:dfee4c06-00be-4b24-82c3-46cade2a09c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c\"" Jan 29 12:56:22.240510 containerd[1580]: time="2025-01-29T12:56:22.239413368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:56:22.240510 containerd[1580]: time="2025-01-29T12:56:22.239745136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:56:22.240510 containerd[1580]: time="2025-01-29T12:56:22.239991100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:22.240510 containerd[1580]: time="2025-01-29T12:56:22.240281310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:56:22.316576 containerd[1580]: time="2025-01-29T12:56:22.316463724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxmjl,Uid:1e22ea74-beab-433f-90fd-803bfaa2c127,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9\"" Jan 29 12:56:22.325893 containerd[1580]: time="2025-01-29T12:56:22.325853105Z" level=info msg="CreateContainer within sandbox \"fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:56:22.689242 containerd[1580]: time="2025-01-29T12:56:22.688953180Z" level=info msg="CreateContainer within sandbox \"fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9903331c096cc50c0b51ee97aa9c1af1831271247fb3b50c1e5b044f2c9b144\"" Jan 29 12:56:22.690277 containerd[1580]: time="2025-01-29T12:56:22.689948583Z" level=info msg="StartContainer for \"e9903331c096cc50c0b51ee97aa9c1af1831271247fb3b50c1e5b044f2c9b144\"" Jan 29 12:56:22.765096 containerd[1580]: time="2025-01-29T12:56:22.765059560Z" level=info msg="StartContainer for \"e9903331c096cc50c0b51ee97aa9c1af1831271247fb3b50c1e5b044f2c9b144\" returns successfully" Jan 29 12:56:23.094786 containerd[1580]: time="2025-01-29T12:56:23.094714105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:23.097360 containerd[1580]: time="2025-01-29T12:56:23.096806911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 12:56:23.102265 containerd[1580]: time="2025-01-29T12:56:23.102225974Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:23.105216 containerd[1580]: time="2025-01-29T12:56:23.105153418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:23.106156 containerd[1580]: time="2025-01-29T12:56:23.106121086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.343752199s" Jan 29 12:56:23.106257 containerd[1580]: time="2025-01-29T12:56:23.106239370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:56:23.107484 containerd[1580]: time="2025-01-29T12:56:23.107458044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:56:23.116108 containerd[1580]: time="2025-01-29T12:56:23.115955815Z" level=info msg="CreateContainer within sandbox \"272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:56:23.152561 containerd[1580]: time="2025-01-29T12:56:23.152523364Z" level=info 
msg="CreateContainer within sandbox \"272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"341f9cdb20880248c01231daee19d9192147b91482624cc0eaf9a891433c0b55\"" Jan 29 12:56:23.153577 containerd[1580]: time="2025-01-29T12:56:23.153528604Z" level=info msg="StartContainer for \"341f9cdb20880248c01231daee19d9192147b91482624cc0eaf9a891433c0b55\"" Jan 29 12:56:23.154981 systemd-networkd[1206]: cali634fd7d0b72: Gained IPv6LL Jan 29 12:56:23.229829 containerd[1580]: time="2025-01-29T12:56:23.229652590Z" level=info msg="StartContainer for \"341f9cdb20880248c01231daee19d9192147b91482624cc0eaf9a891433c0b55\" returns successfully" Jan 29 12:56:23.795065 systemd-networkd[1206]: cali55d7548fbc9: Gained IPv6LL Jan 29 12:56:23.975356 kubelet[2821]: I0129 12:56:23.973819 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wxmjl" podStartSLOduration=41.97380223 podStartE2EDuration="41.97380223s" podCreationTimestamp="2025-01-29 12:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:56:22.979944984 +0000 UTC m=+53.530127548" watchObservedRunningTime="2025-01-29 12:56:23.97380223 +0000 UTC m=+54.523984804" Jan 29 12:56:23.996521 kubelet[2821]: I0129 12:56:23.995638 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64cd9546dc-s76sw" podStartSLOduration=30.608350354 podStartE2EDuration="34.995619157s" podCreationTimestamp="2025-01-29 12:55:49 +0000 UTC" firstStartedPulling="2025-01-29 12:56:18.71995633 +0000 UTC m=+49.270138894" lastFinishedPulling="2025-01-29 12:56:23.107225133 +0000 UTC m=+53.657407697" observedRunningTime="2025-01-29 12:56:23.977034559 +0000 UTC m=+54.527217174" watchObservedRunningTime="2025-01-29 12:56:23.995619157 +0000 UTC m=+54.545801721" Jan 29 12:56:24.961275 kubelet[2821]: I0129 12:56:24.960951 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:56:26.660149 containerd[1580]: time="2025-01-29T12:56:26.658911789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:26.662134 containerd[1580]: time="2025-01-29T12:56:26.661829099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 12:56:26.664230 containerd[1580]: time="2025-01-29T12:56:26.664198665Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:26.668639 containerd[1580]: time="2025-01-29T12:56:26.668608564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:26.669397 containerd[1580]: time="2025-01-29T12:56:26.669368770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size 
\"35634244\" in 3.5618597s" Jan 29 12:56:26.669548 containerd[1580]: time="2025-01-29T12:56:26.669479860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 12:56:26.672790 containerd[1580]: time="2025-01-29T12:56:26.671334102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:56:26.690709 containerd[1580]: time="2025-01-29T12:56:26.690677628Z" level=info msg="CreateContainer within sandbox \"2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:56:26.713337 containerd[1580]: time="2025-01-29T12:56:26.713283725Z" level=info msg="CreateContainer within sandbox \"2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9c1a53771d396999afeab4962bbd7ca3c889e7eb9af39032de29d37c4e4a4ff7\"" Jan 29 12:56:26.714761 containerd[1580]: time="2025-01-29T12:56:26.714709168Z" level=info msg="StartContainer for \"9c1a53771d396999afeab4962bbd7ca3c889e7eb9af39032de29d37c4e4a4ff7\"" Jan 29 12:56:26.805943 containerd[1580]: time="2025-01-29T12:56:26.805908633Z" level=info msg="StartContainer for \"9c1a53771d396999afeab4962bbd7ca3c889e7eb9af39032de29d37c4e4a4ff7\" returns successfully" Jan 29 12:56:26.984084 kubelet[2821]: I0129 12:56:26.983751 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68547577c7-5ck74" podStartSLOduration=30.603690757 podStartE2EDuration="37.98373106s" podCreationTimestamp="2025-01-29 12:55:49 +0000 UTC" firstStartedPulling="2025-01-29 12:56:19.290527712 +0000 UTC m=+49.840710286" lastFinishedPulling="2025-01-29 12:56:26.670568025 +0000 UTC m=+57.220750589" observedRunningTime="2025-01-29 12:56:26.980364842 +0000 UTC m=+57.530547416" watchObservedRunningTime="2025-01-29 12:56:26.98373106 +0000 UTC m=+57.533913634" Jan 29 12:56:27.072944 containerd[1580]: time="2025-01-29T12:56:27.072325684Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:27.074127 containerd[1580]: time="2025-01-29T12:56:27.074094415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:56:27.077028 containerd[1580]: time="2025-01-29T12:56:27.077003838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 405.630812ms" Jan 29 12:56:27.077206 containerd[1580]: time="2025-01-29T12:56:27.077117083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:56:27.080161 containerd[1580]: time="2025-01-29T12:56:27.078963569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:56:27.080479 containerd[1580]: time="2025-01-29T12:56:27.080398158Z" level=info msg="CreateContainer within sandbox \"2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:56:27.109629 containerd[1580]: time="2025-01-29T12:56:27.109569052Z" level=info msg="CreateContainer within sandbox \"2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a8b88a5eb2b9c5d2732be5e4cfa281a3002a9d1d1862461dfddccd6ffff2c4db\"" Jan 29 12:56:27.110366 containerd[1580]: time="2025-01-29T12:56:27.110336472Z" level=info msg="StartContainer for \"a8b88a5eb2b9c5d2732be5e4cfa281a3002a9d1d1862461dfddccd6ffff2c4db\"" Jan 29 12:56:27.179552 containerd[1580]: time="2025-01-29T12:56:27.179514123Z" level=info msg="StartContainer for \"a8b88a5eb2b9c5d2732be5e4cfa281a3002a9d1d1862461dfddccd6ffff2c4db\" returns successfully" Jan 29 12:56:27.892453 kubelet[2821]: I0129 12:56:27.892421 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:56:28.979790 kubelet[2821]: I0129 12:56:28.979033 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:56:29.003241 containerd[1580]: time="2025-01-29T12:56:29.003174979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:29.005196 containerd[1580]: time="2025-01-29T12:56:29.005141842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 12:56:29.006665 containerd[1580]: time="2025-01-29T12:56:29.006551132Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:29.010306 containerd[1580]: time="2025-01-29T12:56:29.010257378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:29.012251 containerd[1580]: time="2025-01-29T12:56:29.011708757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.932715501s" Jan 29 12:56:29.012251 containerd[1580]: time="2025-01-29T12:56:29.011755876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 12:56:29.016654 containerd[1580]: time="2025-01-29T12:56:29.016612903Z" level=info msg="CreateContainer within sandbox \"cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:56:29.049249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657032001.mount: Deactivated successfully. 
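A note on the pod_startup_latency_tracker entries above: the durations are mutually consistent, with podStartSLOduration equal to the end-to-end startup time minus the image-pull window (for coredns-7db6d8ff4d-wxmjl the pull timestamps are zero because the image was already present, so the two durations are identical). The 405.630812ms re-pull of the apiserver image, with only 77 bytes read, is likewise consistent with every layer already being local, so only the manifest is checked. A minimal Go sketch of the latency arithmetic, using the calico-apiserver-64cd9546dc-s76sw timestamps from the log; the variable names are illustrative, not kubelet's:

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-01-29T12:55:49Z")            // podCreationTimestamp
	firstPull := parse("2025-01-29T12:56:18.71995633Z") // firstStartedPulling
	lastPull := parse("2025-01-29T12:56:23.107225133Z") // lastFinishedPulling
	running := parse("2025-01-29T12:56:23.995619157Z")  // watchObservedRunningTime

	e2e := running.Sub(created)     // 34.995619157s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 4.387268803s spent pulling images
	slo := e2e - pull               // 30.608350354s = podStartSLOduration
	fmt.Println(e2e, pull, slo)
}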
Jan 29 12:56:29.056730 containerd[1580]: time="2025-01-29T12:56:29.056608025Z" level=info msg="CreateContainer within sandbox \"cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"47f44ae6250073cafd01f03bfe48737964ddf46f9e7501f1116d9ce71f39fd37\"" Jan 29 12:56:29.057703 containerd[1580]: time="2025-01-29T12:56:29.057605949Z" level=info msg="StartContainer for \"47f44ae6250073cafd01f03bfe48737964ddf46f9e7501f1116d9ce71f39fd37\"" Jan 29 12:56:29.139039 containerd[1580]: time="2025-01-29T12:56:29.138975726Z" level=info msg="StartContainer for \"47f44ae6250073cafd01f03bfe48737964ddf46f9e7501f1116d9ce71f39fd37\" returns successfully" Jan 29 12:56:29.143152 containerd[1580]: time="2025-01-29T12:56:29.142822557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:56:29.532205 containerd[1580]: time="2025-01-29T12:56:29.532031196Z" level=info msg="StopPodSandbox for \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\"" Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.619 [WARNING][5148] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dfee4c06-00be-4b24-82c3-46cade2a09c8", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c", Pod:"csi-node-driver-x8b47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali634fd7d0b72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.619 [INFO][5148] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.619 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" iface="eth0" netns="" Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.619 [INFO][5148] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.619 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.671 [INFO][5157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" HandleID="k8s-pod-network.841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.672 [INFO][5157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.672 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.682 [WARNING][5157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" HandleID="k8s-pod-network.841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.682 [INFO][5157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" HandleID="k8s-pod-network.841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.684 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:29.689668 containerd[1580]: 2025-01-29 12:56:29.688 [INFO][5148] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:29.691150 containerd[1580]: time="2025-01-29T12:56:29.689860230Z" level=info msg="TearDown network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\" successfully" Jan 29 12:56:29.691150 containerd[1580]: time="2025-01-29T12:56:29.689885678Z" level=info msg="StopPodSandbox for \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\" returns successfully" Jan 29 12:56:29.691150 containerd[1580]: time="2025-01-29T12:56:29.690504326Z" level=info msg="RemovePodSandbox for \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\"" Jan 29 12:56:29.691150 containerd[1580]: time="2025-01-29T12:56:29.690532660Z" level=info msg="Forcibly stopping sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\"" Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.746 [WARNING][5175] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dfee4c06-00be-4b24-82c3-46cade2a09c8", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c", Pod:"csi-node-driver-x8b47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali634fd7d0b72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.747 [INFO][5175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.747 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" iface="eth0" netns="" Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.747 [INFO][5175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.747 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.769 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" HandleID="k8s-pod-network.841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.769 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.769 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.777 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" HandleID="k8s-pod-network.841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.777 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" HandleID="k8s-pod-network.841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-csi--node--driver--x8b47-eth0" Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.778 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:29.781563 containerd[1580]: 2025-01-29 12:56:29.779 [INFO][5175] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec" Jan 29 12:56:29.782140 containerd[1580]: time="2025-01-29T12:56:29.781612695Z" level=info msg="TearDown network for sandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\" successfully" Jan 29 12:56:29.790205 containerd[1580]: time="2025-01-29T12:56:29.790061773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:56:29.790688 containerd[1580]: time="2025-01-29T12:56:29.790309391Z" level=info msg="RemovePodSandbox \"841a4aae333b5f9d931b541cac54f15615165fa6b43f252009312bd673ea48ec\" returns successfully" Jan 29 12:56:29.791156 containerd[1580]: time="2025-01-29T12:56:29.791048856Z" level=info msg="StopPodSandbox for \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\"" Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.837 [WARNING][5199] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0", GenerateName:"calico-kube-controllers-68547577c7-", Namespace:"calico-system", SelfLink:"", UID:"6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68547577c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94", Pod:"calico-kube-controllers-68547577c7-5ck74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc12303ede8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.837 [INFO][5199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.837 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" iface="eth0" netns="" Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.837 [INFO][5199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.837 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.862 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" HandleID="k8s-pod-network.11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.862 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.862 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.869 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" HandleID="k8s-pod-network.11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.870 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" HandleID="k8s-pod-network.11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.871 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:29.876536 containerd[1580]: 2025-01-29 12:56:29.872 [INFO][5199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:29.877368 containerd[1580]: time="2025-01-29T12:56:29.876574918Z" level=info msg="TearDown network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\" successfully" Jan 29 12:56:29.877368 containerd[1580]: time="2025-01-29T12:56:29.876601959Z" level=info msg="StopPodSandbox for \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\" returns successfully" Jan 29 12:56:29.878099 containerd[1580]: time="2025-01-29T12:56:29.877999107Z" level=info msg="RemovePodSandbox for \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\"" Jan 29 12:56:29.878099 containerd[1580]: time="2025-01-29T12:56:29.878038150Z" level=info msg="Forcibly stopping sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\"" Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.956 [WARNING][5223] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0", GenerateName:"calico-kube-controllers-68547577c7-", Namespace:"calico-system", SelfLink:"", UID:"6ce86f96-6ca0-4dae-a772-8ea7a02dd7b7", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68547577c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"2f5609484573d64b6db1360f0d2af46347d27dd7d463f1427fb8da1b09873d94", Pod:"calico-kube-controllers-68547577c7-5ck74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc12303ede8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.956 [INFO][5223] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.956 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" iface="eth0" netns="" Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.956 [INFO][5223] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.956 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.978 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" HandleID="k8s-pod-network.11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.979 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.979 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.988 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" HandleID="k8s-pod-network.11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.988 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" HandleID="k8s-pod-network.11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--kube--controllers--68547577c7--5ck74-eth0" Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.990 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:29.995759 containerd[1580]: 2025-01-29 12:56:29.993 [INFO][5223] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83" Jan 29 12:56:29.995759 containerd[1580]: time="2025-01-29T12:56:29.994849314Z" level=info msg="TearDown network for sandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\" successfully" Jan 29 12:56:30.004340 containerd[1580]: time="2025-01-29T12:56:30.004295552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:56:30.004730 containerd[1580]: time="2025-01-29T12:56:30.004376855Z" level=info msg="RemovePodSandbox \"11fec68aa8dd82f0e35528d777791fe779a48aa9642ff6ac17f2a499bbd8bb83\" returns successfully" Jan 29 12:56:30.006121 containerd[1580]: time="2025-01-29T12:56:30.006092583Z" level=info msg="StopPodSandbox for \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\"" Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.057 [WARNING][5250] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0", GenerateName:"calico-apiserver-64cd9546dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f40c0f50-7def-47b4-a1b4-bdc72134047a", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cd9546dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f", Pod:"calico-apiserver-64cd9546dc-kqbdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb68f8625f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.058 [INFO][5250] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.058 [INFO][5250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" iface="eth0" netns="" Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.058 [INFO][5250] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.058 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.078 [INFO][5256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" HandleID="k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.079 [INFO][5256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.079 [INFO][5256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.088 [WARNING][5256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" HandleID="k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.088 [INFO][5256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" HandleID="k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.091 [INFO][5256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:30.097488 containerd[1580]: 2025-01-29 12:56:30.092 [INFO][5250] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:30.097488 containerd[1580]: time="2025-01-29T12:56:30.095915109Z" level=info msg="TearDown network for sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\" successfully" Jan 29 12:56:30.097488 containerd[1580]: time="2025-01-29T12:56:30.095940928Z" level=info msg="StopPodSandbox for \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\" returns successfully" Jan 29 12:56:30.097488 containerd[1580]: time="2025-01-29T12:56:30.096478914Z" level=info msg="RemovePodSandbox for \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\"" Jan 29 12:56:30.097488 containerd[1580]: time="2025-01-29T12:56:30.096505934Z" level=info msg="Forcibly stopping sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\"" Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.166 [WARNING][5274] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0", GenerateName:"calico-apiserver-64cd9546dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f40c0f50-7def-47b4-a1b4-bdc72134047a", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cd9546dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"2e3743b540ed6560c5224b6ab2b666f1d0911786cdbce0d3281a98e45cbbd70f", Pod:"calico-apiserver-64cd9546dc-kqbdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb68f8625f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.167 [INFO][5274] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.167 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" iface="eth0" netns="" Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.167 [INFO][5274] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.167 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.196 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" HandleID="k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.196 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.196 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.209 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" HandleID="k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.209 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" HandleID="k8s-pod-network.f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--kqbdx-eth0" Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.213 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:30.218975 containerd[1580]: 2025-01-29 12:56:30.216 [INFO][5274] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676" Jan 29 12:56:30.218975 containerd[1580]: time="2025-01-29T12:56:30.218938650Z" level=info msg="TearDown network for sandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\" successfully" Jan 29 12:56:30.652380 containerd[1580]: time="2025-01-29T12:56:30.652036641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:56:30.652380 containerd[1580]: time="2025-01-29T12:56:30.652175112Z" level=info msg="RemovePodSandbox \"f8d76da07cecdb063ab5f134359b619e936e65c347d240e505a80a64a1e23676\" returns successfully" Jan 29 12:56:30.657818 containerd[1580]: time="2025-01-29T12:56:30.655205089Z" level=info msg="StopPodSandbox for \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\"" Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.747 [WARNING][5299] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e22ea74-beab-433f-90fd-803bfaa2c127", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9", Pod:"coredns-7db6d8ff4d-wxmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55d7548fbc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.747 [INFO][5299] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.747 [INFO][5299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" iface="eth0" netns="" Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.747 [INFO][5299] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.747 [INFO][5299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.768 [INFO][5305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" HandleID="k8s-pod-network.91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.768 [INFO][5305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.768 [INFO][5305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.776 [WARNING][5305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" HandleID="k8s-pod-network.91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.776 [INFO][5305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" HandleID="k8s-pod-network.91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.777 [INFO][5305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:30.780240 containerd[1580]: 2025-01-29 12:56:30.779 [INFO][5299] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:30.782268 containerd[1580]: time="2025-01-29T12:56:30.780275263Z" level=info msg="TearDown network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\" successfully" Jan 29 12:56:30.782268 containerd[1580]: time="2025-01-29T12:56:30.780321230Z" level=info msg="StopPodSandbox for \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\" returns successfully" Jan 29 12:56:30.782268 containerd[1580]: time="2025-01-29T12:56:30.781001494Z" level=info msg="RemovePodSandbox for \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\"" Jan 29 12:56:30.782268 containerd[1580]: time="2025-01-29T12:56:30.781026250Z" level=info msg="Forcibly stopping sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\"" Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.821 [WARNING][5323] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e22ea74-beab-433f-90fd-803bfaa2c127", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"fe3f0c5dec8e147c29c74bf5abad37b66c85a8f75e1dcf82ededece59606a9e9", Pod:"coredns-7db6d8ff4d-wxmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55d7548fbc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.821 [INFO][5323] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.821 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" iface="eth0" netns="" Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.821 [INFO][5323] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.821 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.849 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" HandleID="k8s-pod-network.91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.849 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.849 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.856 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" HandleID="k8s-pod-network.91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.857 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" HandleID="k8s-pod-network.91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--wxmjl-eth0" Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.860 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:30.863688 containerd[1580]: 2025-01-29 12:56:30.862 [INFO][5323] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7" Jan 29 12:56:30.863688 containerd[1580]: time="2025-01-29T12:56:30.863758416Z" level=info msg="TearDown network for sandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\" successfully" Jan 29 12:56:30.868410 containerd[1580]: time="2025-01-29T12:56:30.868334943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:56:30.868565 containerd[1580]: time="2025-01-29T12:56:30.868546972Z" level=info msg="RemovePodSandbox \"91fd0a1910ee34fd8ee989edfac62bf7ab816910e4d5f1f70776c770c7de4af7\" returns successfully" Jan 29 12:56:30.869091 containerd[1580]: time="2025-01-29T12:56:30.869050222Z" level=info msg="StopPodSandbox for \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\"" Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.918 [WARNING][5347] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7", Pod:"coredns-7db6d8ff4d-t9ghv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb14f95517e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.918 [INFO][5347] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.918 [INFO][5347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" iface="eth0" netns="" Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.918 [INFO][5347] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.918 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.966 [INFO][5353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" HandleID="k8s-pod-network.419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.966 [INFO][5353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.966 [INFO][5353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.976 [WARNING][5353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" HandleID="k8s-pod-network.419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.976 [INFO][5353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" HandleID="k8s-pod-network.419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.979 [INFO][5353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:30.986041 containerd[1580]: 2025-01-29 12:56:30.981 [INFO][5347] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:30.986041 containerd[1580]: time="2025-01-29T12:56:30.985899020Z" level=info msg="TearDown network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\" successfully" Jan 29 12:56:30.986041 containerd[1580]: time="2025-01-29T12:56:30.985920201Z" level=info msg="StopPodSandbox for \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\" returns successfully" Jan 29 12:56:30.987946 containerd[1580]: time="2025-01-29T12:56:30.987491376Z" level=info msg="RemovePodSandbox for \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\"" Jan 29 12:56:30.987946 containerd[1580]: time="2025-01-29T12:56:30.987530269Z" level=info msg="Forcibly stopping sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\"" Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.064 [WARNING][5373] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"238b28a6-d7cf-4fd1-bde3-d48cdaf3e60c", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"be08b48b6b57ce74c0717ff941ba3acba5d7edf792420803eecb8ce204c359d7", Pod:"coredns-7db6d8ff4d-t9ghv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb14f95517e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.065 [INFO][5373] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.065 [INFO][5373] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" iface="eth0" netns="" Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.065 [INFO][5373] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.065 [INFO][5373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.120 [INFO][5379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" HandleID="k8s-pod-network.419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.121 [INFO][5379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.121 [INFO][5379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.148 [WARNING][5379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" HandleID="k8s-pod-network.419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.148 [INFO][5379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" HandleID="k8s-pod-network.419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-coredns--7db6d8ff4d--t9ghv-eth0" Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.151 [INFO][5379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:31.162998 containerd[1580]: 2025-01-29 12:56:31.159 [INFO][5373] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603" Jan 29 12:56:31.163715 containerd[1580]: time="2025-01-29T12:56:31.163057509Z" level=info msg="TearDown network for sandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\" successfully" Jan 29 12:56:31.173814 containerd[1580]: time="2025-01-29T12:56:31.171068504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:56:31.173814 containerd[1580]: time="2025-01-29T12:56:31.171136001Z" level=info msg="RemovePodSandbox \"419570325313af238961a05595441082b9adf3ac0751e59af15be93bf053c603\" returns successfully" Jan 29 12:56:31.173814 containerd[1580]: time="2025-01-29T12:56:31.173258326Z" level=info msg="StopPodSandbox for \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\"" Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.287 [WARNING][5398] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0", GenerateName:"calico-apiserver-64cd9546dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2143d0c-26ec-4f29-91e9-994f575528dd", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cd9546dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f", Pod:"calico-apiserver-64cd9546dc-s76sw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b148c94038", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.288 [INFO][5398] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.288 [INFO][5398] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" iface="eth0" netns="" Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.288 [INFO][5398] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.288 [INFO][5398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.315 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" HandleID="k8s-pod-network.cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.315 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.316 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.323 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" HandleID="k8s-pod-network.cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.323 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" HandleID="k8s-pod-network.cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.325 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:31.327712 containerd[1580]: 2025-01-29 12:56:31.326 [INFO][5398] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:31.328536 containerd[1580]: time="2025-01-29T12:56:31.328283800Z" level=info msg="TearDown network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\" successfully" Jan 29 12:56:31.328536 containerd[1580]: time="2025-01-29T12:56:31.328311492Z" level=info msg="StopPodSandbox for \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\" returns successfully" Jan 29 12:56:31.329103 containerd[1580]: time="2025-01-29T12:56:31.328747755Z" level=info msg="RemovePodSandbox for \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\"" Jan 29 12:56:31.329103 containerd[1580]: time="2025-01-29T12:56:31.328812807Z" level=info msg="Forcibly stopping sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\"" Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.408 [WARNING][5423] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0", GenerateName:"calico-apiserver-64cd9546dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2143d0c-26ec-4f29-91e9-994f575528dd", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cd9546dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-0a72854eea.novalocal", ContainerID:"272f20312496532de5c005a49c65840d2c88470249749a39bf8ea1dc302ea18f", Pod:"calico-apiserver-64cd9546dc-s76sw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b148c94038", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.408 [INFO][5423] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.408 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" iface="eth0" netns="" Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.408 [INFO][5423] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.408 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.457 [INFO][5433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" HandleID="k8s-pod-network.cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.457 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.457 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.468 [WARNING][5433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" HandleID="k8s-pod-network.cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.468 [INFO][5433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" HandleID="k8s-pod-network.cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Workload="ci--4081--3--0--e--0a72854eea.novalocal-k8s-calico--apiserver--64cd9546dc--s76sw-eth0" Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.470 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:56:31.474198 containerd[1580]: 2025-01-29 12:56:31.471 [INFO][5423] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53" Jan 29 12:56:31.474719 containerd[1580]: time="2025-01-29T12:56:31.474262884Z" level=info msg="TearDown network for sandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\" successfully" Jan 29 12:56:31.479275 containerd[1580]: time="2025-01-29T12:56:31.479131310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:56:31.479355 containerd[1580]: time="2025-01-29T12:56:31.479291763Z" level=info msg="RemovePodSandbox \"cb00048445a23a8b786446f033ef5204238bbd157e4a0a8de98aa945fc97af53\" returns successfully" Jan 29 12:56:31.651829 containerd[1580]: time="2025-01-29T12:56:31.650705226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:31.653269 containerd[1580]: time="2025-01-29T12:56:31.653113740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 12:56:31.655725 containerd[1580]: time="2025-01-29T12:56:31.654518391Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:31.657486 containerd[1580]: time="2025-01-29T12:56:31.657452577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:56:31.658365 containerd[1580]: time="2025-01-29T12:56:31.658326225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.51546207s" Jan 29 12:56:31.658365 containerd[1580]: time="2025-01-29T12:56:31.658359668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:56:31.661136 
containerd[1580]: time="2025-01-29T12:56:31.660938954Z" level=info msg="CreateContainer within sandbox \"cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:56:31.682890 containerd[1580]: time="2025-01-29T12:56:31.682851398Z" level=info msg="CreateContainer within sandbox \"cf014ce31c8f48eb662524aee71aa929094fb02c4e4e6770636df41e066c498c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"470a227f9fd01195ed5d700c5f2ea8a476d4ff582231572163884dc6b45bf3ad\"" Jan 29 12:56:31.684969 containerd[1580]: time="2025-01-29T12:56:31.684937694Z" level=info msg="StartContainer for \"470a227f9fd01195ed5d700c5f2ea8a476d4ff582231572163884dc6b45bf3ad\"" Jan 29 12:56:31.782157 containerd[1580]: time="2025-01-29T12:56:31.782051794Z" level=info msg="StartContainer for \"470a227f9fd01195ed5d700c5f2ea8a476d4ff582231572163884dc6b45bf3ad\" returns successfully" Jan 29 12:56:32.064970 kubelet[2821]: I0129 12:56:32.064890 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x8b47" podStartSLOduration=33.643690849 podStartE2EDuration="43.064866018s" podCreationTimestamp="2025-01-29 12:55:49 +0000 UTC" firstStartedPulling="2025-01-29 12:56:22.238347112 +0000 UTC m=+52.788529676" lastFinishedPulling="2025-01-29 12:56:31.659522271 +0000 UTC m=+62.209704845" observedRunningTime="2025-01-29 12:56:32.061670309 +0000 UTC m=+62.611852873" watchObservedRunningTime="2025-01-29 12:56:32.064866018 +0000 UTC m=+62.615048612" Jan 29 12:56:32.067200 kubelet[2821]: I0129 12:56:32.065076 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64cd9546dc-kqbdx" podStartSLOduration=35.598444097 podStartE2EDuration="43.065070655s" podCreationTimestamp="2025-01-29 12:55:49 +0000 UTC" firstStartedPulling="2025-01-29 12:56:19.611513747 +0000 UTC m=+50.161696311" lastFinishedPulling="2025-01-29 12:56:27.078140305 +0000 UTC m=+57.628322869" observedRunningTime="2025-01-29 12:56:28.005961677 +0000 UTC m=+58.556144261" watchObservedRunningTime="2025-01-29 12:56:32.065070655 +0000 UTC m=+62.615253219" Jan 29 12:56:32.692870 kubelet[2821]: I0129 12:56:32.692578 2821 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:56:32.692870 kubelet[2821]: I0129 12:56:32.692639 2821 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:56:49.608898 update_engine[1567]: I20250129 12:56:49.608271 1567 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 12:56:49.608898 update_engine[1567]: I20250129 12:56:49.608376 1567 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 12:56:49.608898 update_engine[1567]: I20250129 12:56:49.608758 1567 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 12:56:49.610737 update_engine[1567]: I20250129 12:56:49.610610 1567 omaha_request_params.cc:62] Current group set to lts Jan 29 12:56:49.615267 update_engine[1567]: I20250129 12:56:49.615023 1567 update_attempter.cc:499] Already updated boot flags. Skipping. 
Jan 29 12:56:49.615267 update_engine[1567]: I20250129 12:56:49.615069 1567 update_attempter.cc:643] Scheduling an action processor start. Jan 29 12:56:49.615267 update_engine[1567]: I20250129 12:56:49.615105 1567 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 12:56:49.615267 update_engine[1567]: I20250129 12:56:49.615170 1567 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 12:56:49.615581 update_engine[1567]: I20250129 12:56:49.615326 1567 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 12:56:49.615581 update_engine[1567]: I20250129 12:56:49.615352 1567 omaha_request_action.cc:272] Request: Jan 29 12:56:49.615581 update_engine[1567]: Jan 29 12:56:49.615581 update_engine[1567]: Jan 29 12:56:49.615581 update_engine[1567]: Jan 29 12:56:49.615581 update_engine[1567]: Jan 29 12:56:49.615581 update_engine[1567]: Jan 29 12:56:49.615581 update_engine[1567]: Jan 29 12:56:49.615581 update_engine[1567]: Jan 29 12:56:49.615581 update_engine[1567]: Jan 29 12:56:49.615581 update_engine[1567]: I20250129 12:56:49.615366 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:56:49.616753 locksmithd[1599]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 12:56:49.623361 update_engine[1567]: I20250129 12:56:49.623279 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:56:49.623978 update_engine[1567]: I20250129 12:56:49.623906 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:56:49.636597 update_engine[1567]: E20250129 12:56:49.636511 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:56:49.636744 update_engine[1567]: I20250129 12:56:49.636685 1567 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 12:56:58.501749 kubelet[2821]: I0129 12:56:58.501298 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:56:59.542628 update_engine[1567]: I20250129 12:56:59.542480 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:56:59.543687 update_engine[1567]: I20250129 12:56:59.542979 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:56:59.543687 update_engine[1567]: I20250129 12:56:59.543370 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:56:59.555045 update_engine[1567]: E20250129 12:56:59.554726 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:56:59.555271 update_engine[1567]: I20250129 12:56:59.555105 1567 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 12:57:06.405960 systemd[1]: run-containerd-runc-k8s.io-9c1a53771d396999afeab4962bbd7ca3c889e7eb9af39032de29d37c4e4a4ff7-runc.HZqjxR.mount: Deactivated successfully. Jan 29 12:57:09.442144 systemd[1]: Started sshd@7-172.24.4.72:22-172.24.4.1:40236.service - OpenSSH per-connection server daemon (172.24.4.1:40236). Jan 29 12:57:09.536240 update_engine[1567]: I20250129 12:57:09.536167 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:57:09.537204 update_engine[1567]: I20250129 12:57:09.536345 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:57:09.537204 update_engine[1567]: I20250129 12:57:09.536542 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 12:57:09.548019 update_engine[1567]: E20250129 12:57:09.547955 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:57:09.548210 update_engine[1567]: I20250129 12:57:09.548048 1567 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 12:57:10.863912 sshd[5585]: Accepted publickey for core from 172.24.4.1 port 40236 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:10.867902 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:10.879218 systemd-logind[1561]: New session 10 of user core. Jan 29 12:57:10.888383 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:57:11.652363 sshd[5585]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:11.658960 systemd[1]: sshd@7-172.24.4.72:22-172.24.4.1:40236.service: Deactivated successfully. Jan 29 12:57:11.669389 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:57:11.672302 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:57:11.674820 systemd-logind[1561]: Removed session 10. Jan 29 12:57:16.664412 systemd[1]: Started sshd@8-172.24.4.72:22-172.24.4.1:48420.service - OpenSSH per-connection server daemon (172.24.4.1:48420). Jan 29 12:57:18.235943 sshd[5602]: Accepted publickey for core from 172.24.4.1 port 48420 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:18.238980 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:18.250315 systemd-logind[1561]: New session 11 of user core. Jan 29 12:57:18.258359 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:57:18.969249 sshd[5602]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:18.975387 systemd[1]: sshd@8-172.24.4.72:22-172.24.4.1:48420.service: Deactivated successfully. Jan 29 12:57:18.980899 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:57:18.982561 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:57:18.983838 systemd-logind[1561]: Removed session 11. Jan 29 12:57:19.533450 update_engine[1567]: I20250129 12:57:19.533118 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:57:19.534304 update_engine[1567]: I20250129 12:57:19.533557 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:57:19.534304 update_engine[1567]: I20250129 12:57:19.534018 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:57:19.545237 update_engine[1567]: E20250129 12:57:19.545158 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:57:19.545451 update_engine[1567]: I20250129 12:57:19.545262 1567 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 12:57:19.545451 update_engine[1567]: I20250129 12:57:19.545281 1567 omaha_request_action.cc:617] Omaha request response: Jan 29 12:57:19.545451 update_engine[1567]: E20250129 12:57:19.545416 1567 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 12:57:19.545713 update_engine[1567]: I20250129 12:57:19.545478 1567 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 29 12:57:19.545713 update_engine[1567]: I20250129 12:57:19.545528 1567 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:57:19.545713 update_engine[1567]: I20250129 12:57:19.545543 1567 update_attempter.cc:306] Processing Done. Jan 29 12:57:19.545713 update_engine[1567]: E20250129 12:57:19.545567 1567 update_attempter.cc:619] Update failed. Jan 29 12:57:19.545713 update_engine[1567]: I20250129 12:57:19.545579 1567 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 12:57:19.545713 update_engine[1567]: I20250129 12:57:19.545590 1567 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 12:57:19.545713 update_engine[1567]: I20250129 12:57:19.545603 1567 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 29 12:57:19.546162 update_engine[1567]: I20250129 12:57:19.546015 1567 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 12:57:19.546162 update_engine[1567]: I20250129 12:57:19.546077 1567 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 12:57:19.546162 update_engine[1567]: I20250129 12:57:19.546089 1567 omaha_request_action.cc:272] Request: Jan 29 12:57:19.546162 update_engine[1567]: Jan 29 12:57:19.546162 update_engine[1567]: Jan 29 12:57:19.546162 update_engine[1567]: Jan 29 12:57:19.546162 update_engine[1567]: Jan 29 12:57:19.546162 update_engine[1567]: Jan 29 12:57:19.546162 update_engine[1567]: Jan 29 12:57:19.546162 update_engine[1567]: I20250129 12:57:19.546103 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:57:19.547010 update_engine[1567]: I20250129 12:57:19.546362 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:57:19.547010 update_engine[1567]: I20250129 12:57:19.546809 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:57:19.547299 locksmithd[1599]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 12:57:19.558402 update_engine[1567]: E20250129 12:57:19.558316 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:57:19.558614 update_engine[1567]: I20250129 12:57:19.558429 1567 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 12:57:19.558614 update_engine[1567]: I20250129 12:57:19.558449 1567 omaha_request_action.cc:617] Omaha request response: Jan 29 12:57:19.559301 update_engine[1567]: I20250129 12:57:19.558465 1567 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:57:19.559301 update_engine[1567]: I20250129 12:57:19.559255 1567 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:57:19.559301 update_engine[1567]: I20250129 12:57:19.559278 1567 update_attempter.cc:306] Processing Done. Jan 29 12:57:19.559301 update_engine[1567]: I20250129 12:57:19.559293 1567 update_attempter.cc:310] Error event sent. 
Jan 29 12:57:19.559642 update_engine[1567]: I20250129 12:57:19.559315 1567 update_check_scheduler.cc:74] Next update check in 48m43s Jan 29 12:57:19.560560 locksmithd[1599]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 29 12:57:23.982908 systemd[1]: Started sshd@9-172.24.4.72:22-172.24.4.1:57900.service - OpenSSH per-connection server daemon (172.24.4.1:57900). Jan 29 12:57:25.430311 sshd[5623]: Accepted publickey for core from 172.24.4.1 port 57900 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:25.434652 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:25.442994 systemd-logind[1561]: New session 12 of user core. Jan 29 12:57:25.446060 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:57:26.148334 sshd[5623]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:26.156287 systemd[1]: Started sshd@10-172.24.4.72:22-172.24.4.1:57916.service - OpenSSH per-connection server daemon (172.24.4.1:57916). Jan 29 12:57:26.159180 systemd[1]: sshd@9-172.24.4.72:22-172.24.4.1:57900.service: Deactivated successfully. Jan 29 12:57:26.170867 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:57:26.172146 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:57:26.173706 systemd-logind[1561]: Removed session 12. Jan 29 12:57:27.437826 sshd[5650]: Accepted publickey for core from 172.24.4.1 port 57916 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:27.441953 sshd[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:27.455507 systemd-logind[1561]: New session 13 of user core. Jan 29 12:57:27.461357 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:57:28.240650 sshd[5650]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:28.247460 systemd[1]: sshd@10-172.24.4.72:22-172.24.4.1:57916.service: Deactivated successfully. Jan 29 12:57:28.256636 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:57:28.257082 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:57:28.269417 systemd[1]: Started sshd@11-172.24.4.72:22-172.24.4.1:57932.service - OpenSSH per-connection server daemon (172.24.4.1:57932). Jan 29 12:57:28.271306 systemd-logind[1561]: Removed session 13. Jan 29 12:57:29.621164 sshd[5665]: Accepted publickey for core from 172.24.4.1 port 57932 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:29.625760 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:29.636585 systemd-logind[1561]: New session 14 of user core. Jan 29 12:57:29.644425 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:57:30.363273 sshd[5665]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:30.374210 systemd[1]: sshd@11-172.24.4.72:22-172.24.4.1:57932.service: Deactivated successfully. Jan 29 12:57:30.384206 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:57:30.385683 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:57:30.391485 systemd-logind[1561]: Removed session 14. Jan 29 12:57:35.372352 systemd[1]: Started sshd@12-172.24.4.72:22-172.24.4.1:52710.service - OpenSSH per-connection server daemon (172.24.4.1:52710). 
Jan 29 12:57:36.486670 sshd[5686]: Accepted publickey for core from 172.24.4.1 port 52710 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:36.489570 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:36.499761 systemd-logind[1561]: New session 15 of user core. Jan 29 12:57:36.505318 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:57:37.250043 sshd[5686]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:37.256913 systemd[1]: sshd@12-172.24.4.72:22-172.24.4.1:52710.service: Deactivated successfully. Jan 29 12:57:37.267824 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:57:37.268440 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:57:37.271424 systemd-logind[1561]: Removed session 15. Jan 29 12:57:42.261331 systemd[1]: Started sshd@13-172.24.4.72:22-172.24.4.1:52718.service - OpenSSH per-connection server daemon (172.24.4.1:52718). Jan 29 12:57:43.714254 sshd[5727]: Accepted publickey for core from 172.24.4.1 port 52718 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:43.717202 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:43.727450 systemd-logind[1561]: New session 16 of user core. Jan 29 12:57:43.734468 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:57:44.492463 sshd[5727]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:44.495980 systemd[1]: sshd@13-172.24.4.72:22-172.24.4.1:52718.service: Deactivated successfully. Jan 29 12:57:44.503151 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:57:44.504284 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:57:44.506540 systemd-logind[1561]: Removed session 16. Jan 29 12:57:49.508277 systemd[1]: Started sshd@14-172.24.4.72:22-172.24.4.1:54642.service - OpenSSH per-connection server daemon (172.24.4.1:54642). Jan 29 12:57:50.794082 sshd[5744]: Accepted publickey for core from 172.24.4.1 port 54642 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:50.800017 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:50.815852 systemd-logind[1561]: New session 17 of user core. Jan 29 12:57:50.820212 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:57:51.609657 sshd[5744]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:51.616485 systemd[1]: sshd@14-172.24.4.72:22-172.24.4.1:54642.service: Deactivated successfully. Jan 29 12:57:51.622257 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:57:51.623153 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:57:51.626116 systemd-logind[1561]: Removed session 17. Jan 29 12:57:56.622097 systemd[1]: Started sshd@15-172.24.4.72:22-172.24.4.1:37978.service - OpenSSH per-connection server daemon (172.24.4.1:37978). Jan 29 12:57:57.976480 sshd[5817]: Accepted publickey for core from 172.24.4.1 port 37978 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:57:57.979999 sshd[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:57.989806 systemd-logind[1561]: New session 18 of user core. Jan 29 12:57:57.998280 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 12:57:58.800112 sshd[5817]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:58.809506 systemd[1]: Started sshd@16-172.24.4.72:22-172.24.4.1:37982.service - OpenSSH per-connection server daemon (172.24.4.1:37982). Jan 29 12:57:58.812986 systemd[1]: sshd@15-172.24.4.72:22-172.24.4.1:37978.service: Deactivated successfully. Jan 29 12:57:58.822276 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:57:58.826536 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:57:58.829724 systemd-logind[1561]: Removed session 18. Jan 29 12:58:00.309332 sshd[5827]: Accepted publickey for core from 172.24.4.1 port 37982 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:58:00.312257 sshd[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:58:00.323388 systemd-logind[1561]: New session 19 of user core. Jan 29 12:58:00.329958 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:58:01.335657 sshd[5827]: pam_unix(sshd:session): session closed for user core Jan 29 12:58:01.345430 systemd[1]: Started sshd@17-172.24.4.72:22-172.24.4.1:37992.service - OpenSSH per-connection server daemon (172.24.4.1:37992). Jan 29 12:58:01.349676 systemd[1]: sshd@16-172.24.4.72:22-172.24.4.1:37982.service: Deactivated successfully. Jan 29 12:58:01.362146 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:58:01.365999 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:58:01.370414 systemd-logind[1561]: Removed session 19. Jan 29 12:58:02.476247 sshd[5839]: Accepted publickey for core from 172.24.4.1 port 37992 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:58:02.479332 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:58:02.490374 systemd-logind[1561]: New session 20 of user core. Jan 29 12:58:02.495718 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:58:05.543139 sshd[5839]: pam_unix(sshd:session): session closed for user core Jan 29 12:58:05.554243 systemd[1]: Started sshd@18-172.24.4.72:22-172.24.4.1:45502.service - OpenSSH per-connection server daemon (172.24.4.1:45502). Jan 29 12:58:05.554725 systemd[1]: sshd@17-172.24.4.72:22-172.24.4.1:37992.service: Deactivated successfully. Jan 29 12:58:05.558470 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:58:05.561610 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:58:05.564313 systemd-logind[1561]: Removed session 20. Jan 29 12:58:06.902685 sshd[5859]: Accepted publickey for core from 172.24.4.1 port 45502 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:58:06.905855 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:58:06.917306 systemd-logind[1561]: New session 21 of user core. Jan 29 12:58:06.922303 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:58:07.816092 sshd[5859]: pam_unix(sshd:session): session closed for user core Jan 29 12:58:07.823185 systemd[1]: Started sshd@19-172.24.4.72:22-172.24.4.1:45506.service - OpenSSH per-connection server daemon (172.24.4.1:45506). Jan 29 12:58:07.825041 systemd[1]: sshd@18-172.24.4.72:22-172.24.4.1:45502.service: Deactivated successfully. Jan 29 12:58:07.829597 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:58:07.831463 systemd-logind[1561]: Session 21 logged out. 
Waiting for processes to exit. Jan 29 12:58:07.834683 systemd-logind[1561]: Removed session 21. Jan 29 12:58:09.221894 sshd[5890]: Accepted publickey for core from 172.24.4.1 port 45506 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:58:09.224866 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:58:09.232966 systemd-logind[1561]: New session 22 of user core. Jan 29 12:58:09.238133 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:58:10.078579 sshd[5890]: pam_unix(sshd:session): session closed for user core Jan 29 12:58:10.080946 systemd[1]: sshd@19-172.24.4.72:22-172.24.4.1:45506.service: Deactivated successfully. Jan 29 12:58:10.085138 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:58:10.086341 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:58:10.087290 systemd-logind[1561]: Removed session 22. Jan 29 12:58:15.089036 systemd[1]: Started sshd@20-172.24.4.72:22-172.24.4.1:39792.service - OpenSSH per-connection server daemon (172.24.4.1:39792). Jan 29 12:58:16.316616 sshd[5912]: Accepted publickey for core from 172.24.4.1 port 39792 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:58:16.321313 sshd[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:58:16.333410 systemd-logind[1561]: New session 23 of user core. Jan 29 12:58:16.341644 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 12:58:17.264140 sshd[5912]: pam_unix(sshd:session): session closed for user core Jan 29 12:58:17.269811 systemd[1]: sshd@20-172.24.4.72:22-172.24.4.1:39792.service: Deactivated successfully. Jan 29 12:58:17.279451 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit. Jan 29 12:58:17.280967 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 12:58:17.283306 systemd-logind[1561]: Removed session 23. Jan 29 12:58:22.277406 systemd[1]: Started sshd@21-172.24.4.72:22-172.24.4.1:39800.service - OpenSSH per-connection server daemon (172.24.4.1:39800). Jan 29 12:58:23.474844 sshd[5926]: Accepted publickey for core from 172.24.4.1 port 39800 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:58:23.479222 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:58:23.490968 systemd-logind[1561]: New session 24 of user core. Jan 29 12:58:23.501381 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 12:58:24.284611 sshd[5926]: pam_unix(sshd:session): session closed for user core Jan 29 12:58:24.291376 systemd[1]: sshd@21-172.24.4.72:22-172.24.4.1:39800.service: Deactivated successfully. Jan 29 12:58:24.298195 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 12:58:24.299806 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit. Jan 29 12:58:24.304060 systemd-logind[1561]: Removed session 24. Jan 29 12:58:29.298337 systemd[1]: Started sshd@22-172.24.4.72:22-172.24.4.1:57466.service - OpenSSH per-connection server daemon (172.24.4.1:57466). Jan 29 12:58:30.476330 sshd[5962]: Accepted publickey for core from 172.24.4.1 port 57466 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:58:30.479384 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:58:30.492215 systemd-logind[1561]: New session 25 of user core. 
Jan 29 12:58:30.500315 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 12:58:31.338377 sshd[5962]: pam_unix(sshd:session): session closed for user core Jan 29 12:58:31.344468 systemd[1]: sshd@22-172.24.4.72:22-172.24.4.1:57466.service: Deactivated successfully. Jan 29 12:58:31.353930 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit. Jan 29 12:58:31.355006 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 12:58:31.358340 systemd-logind[1561]: Removed session 25. Jan 29 12:58:36.350366 systemd[1]: Started sshd@23-172.24.4.72:22-172.24.4.1:42150.service - OpenSSH per-connection server daemon (172.24.4.1:42150). Jan 29 12:58:37.785813 sshd[5978]: Accepted publickey for core from 172.24.4.1 port 42150 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:58:37.788148 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:58:37.799699 systemd-logind[1561]: New session 26 of user core. Jan 29 12:58:37.807446 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 12:58:39.112540 sshd[5978]: pam_unix(sshd:session): session closed for user core Jan 29 12:58:39.122954 systemd[1]: sshd@23-172.24.4.72:22-172.24.4.1:42150.service: Deactivated successfully. Jan 29 12:58:39.129434 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 12:58:39.130203 systemd-logind[1561]: Session 26 logged out. Waiting for processes to exit. Jan 29 12:58:39.133582 systemd-logind[1561]: Removed session 26. Jan 29 12:58:44.129379 systemd[1]: Started sshd@24-172.24.4.72:22-172.24.4.1:59924.service - OpenSSH per-connection server daemon (172.24.4.1:59924). Jan 29 12:58:45.542848 sshd[6017]: Accepted publickey for core from 172.24.4.1 port 59924 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:58:45.545936 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:58:45.558192 systemd-logind[1561]: New session 27 of user core. Jan 29 12:58:45.565333 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 12:58:46.259427 sshd[6017]: pam_unix(sshd:session): session closed for user core Jan 29 12:58:46.267331 systemd[1]: sshd@24-172.24.4.72:22-172.24.4.1:59924.service: Deactivated successfully. Jan 29 12:58:46.275174 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 12:58:46.277608 systemd-logind[1561]: Session 27 logged out. Waiting for processes to exit. Jan 29 12:58:46.280059 systemd-logind[1561]: Removed session 27.