Jan 29 12:48:10.957808 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:48:10.957834 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:48:10.957844 kernel: BIOS-provided physical RAM map: Jan 29 12:48:10.957852 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 12:48:10.957859 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 12:48:10.957869 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 12:48:10.957879 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 29 12:48:10.957886 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 29 12:48:10.957894 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 12:48:10.957902 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 12:48:10.957910 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 29 12:48:10.957918 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 12:48:10.957925 kernel: NX (Execute Disable) protection: active Jan 29 12:48:10.957933 kernel: APIC: Static calls initialized Jan 29 12:48:10.957944 kernel: SMBIOS 3.0.0 present. Jan 29 12:48:10.957953 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 29 12:48:10.957961 kernel: Hypervisor detected: KVM Jan 29 12:48:10.957969 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:48:10.957977 kernel: kvm-clock: using sched offset of 3385740712 cycles Jan 29 12:48:10.957988 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:48:10.957997 kernel: tsc: Detected 1996.249 MHz processor Jan 29 12:48:10.958005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:48:10.958014 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:48:10.958023 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 29 12:48:10.958031 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 12:48:10.958040 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:48:10.958048 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 29 12:48:10.958056 kernel: ACPI: Early table checksum verification disabled Jan 29 12:48:10.958066 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 29 12:48:10.958075 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:48:10.958083 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:48:10.958092 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:48:10.958100 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 29 12:48:10.958108 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:48:10.958117 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:48:10.958125 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 29 12:48:10.958133 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 29 12:48:10.958144 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 29 12:48:10.958152 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 29 12:48:10.958160 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 29 12:48:10.958172 kernel: No NUMA configuration found Jan 29 12:48:10.958181 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 29 12:48:10.958189 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 29 12:48:10.958200 kernel: Zone ranges: Jan 29 12:48:10.958209 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:48:10.958218 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 12:48:10.958226 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 29 12:48:10.958235 kernel: Movable zone start for each node Jan 29 12:48:10.958244 kernel: Early memory node ranges Jan 29 12:48:10.958252 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 12:48:10.958261 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 29 12:48:10.958272 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 29 12:48:10.958281 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 29 12:48:10.958289 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:48:10.958298 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 12:48:10.958307 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 29 12:48:10.958315 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 12:48:10.958324 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:48:10.958333 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:48:10.958342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 12:48:10.958352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:48:10.958361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:48:10.958370 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:48:10.958379 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:48:10.958387 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:48:10.958413 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 12:48:10.958423 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 12:48:10.958441 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 29 12:48:10.958450 kernel: Booting paravirtualized kernel on KVM Jan 29 12:48:10.958461 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:48:10.958469 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 12:48:10.958477 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 12:48:10.958485 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 12:48:10.958493 kernel: pcpu-alloc: [0] 0 1 Jan 29 12:48:10.958501 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 29 12:48:10.958511 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:48:10.958520 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:48:10.958530 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:48:10.958538 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:48:10.958546 kernel: Fallback order for Node 0: 0 Jan 29 12:48:10.958554 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 29 12:48:10.958562 kernel: Policy zone: Normal Jan 29 12:48:10.958571 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:48:10.958579 kernel: software IO TLB: area num 2. Jan 29 12:48:10.958587 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 29 12:48:10.958596 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:48:10.958606 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:48:10.958614 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:48:10.958622 kernel: Dynamic Preempt: voluntary Jan 29 12:48:10.958630 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:48:10.958641 kernel: rcu: RCU event tracing is enabled. Jan 29 12:48:10.958650 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:48:10.958658 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:48:10.958666 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:48:10.958675 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:48:10.958684 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:48:10.958693 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:48:10.958701 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 12:48:10.958709 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:48:10.958717 kernel: Console: colour VGA+ 80x25 Jan 29 12:48:10.958725 kernel: printk: console [tty0] enabled Jan 29 12:48:10.958733 kernel: printk: console [ttyS0] enabled Jan 29 12:48:10.958741 kernel: ACPI: Core revision 20230628 Jan 29 12:48:10.958750 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:48:10.958758 kernel: x2apic enabled Jan 29 12:48:10.958768 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:48:10.958776 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 12:48:10.958784 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 12:48:10.958792 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 29 12:48:10.958801 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 29 12:48:10.958809 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 29 12:48:10.958817 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:48:10.958825 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:48:10.958834 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:48:10.958844 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:48:10.958852 kernel: Speculative Store Bypass: Vulnerable Jan 29 12:48:10.958860 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 29 12:48:10.958868 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:48:10.958882 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:48:10.958892 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:48:10.958901 kernel: landlock: Up and running. Jan 29 12:48:10.958909 kernel: SELinux: Initializing. Jan 29 12:48:10.958918 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:48:10.958927 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:48:10.958936 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 29 12:48:10.958946 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:48:10.958955 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:48:10.958964 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:48:10.958973 kernel: Performance Events: AMD PMU driver. Jan 29 12:48:10.958981 kernel: ... version: 0 Jan 29 12:48:10.958992 kernel: ... bit width: 48 Jan 29 12:48:10.959000 kernel: ... generic registers: 4 Jan 29 12:48:10.959009 kernel: ... value mask: 0000ffffffffffff Jan 29 12:48:10.959017 kernel: ... max period: 00007fffffffffff Jan 29 12:48:10.959026 kernel: ... fixed-purpose events: 0 Jan 29 12:48:10.959034 kernel: ... event mask: 000000000000000f Jan 29 12:48:10.959043 kernel: signal: max sigframe size: 1440 Jan 29 12:48:10.959051 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:48:10.959060 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:48:10.959070 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:48:10.959079 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:48:10.959087 kernel: .... 
node #0, CPUs: #1 Jan 29 12:48:10.959096 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:48:10.959105 kernel: smpboot: Max logical packages: 2 Jan 29 12:48:10.959113 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 29 12:48:10.959122 kernel: devtmpfs: initialized Jan 29 12:48:10.959130 kernel: x86/mm: Memory block size: 128MB Jan 29 12:48:10.959139 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:48:10.959148 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:48:10.959159 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:48:10.959167 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:48:10.959176 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:48:10.959185 kernel: audit: type=2000 audit(1738154889.609:1): state=initialized audit_enabled=0 res=1 Jan 29 12:48:10.959193 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:48:10.959202 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:48:10.959211 kernel: cpuidle: using governor menu Jan 29 12:48:10.959219 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:48:10.959229 kernel: dca service started, version 1.12.1 Jan 29 12:48:10.959238 kernel: PCI: Using configuration type 1 for base access Jan 29 12:48:10.959247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 12:48:10.959255 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:48:10.959264 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:48:10.959273 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:48:10.959281 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:48:10.959290 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:48:10.959298 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:48:10.959307 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:48:10.959318 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:48:10.959326 kernel: ACPI: Interpreter enabled Jan 29 12:48:10.959335 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 12:48:10.959343 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:48:10.959352 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:48:10.959361 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 12:48:10.959369 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 29 12:48:10.959378 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:48:10.959522 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:48:10.959626 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 12:48:10.959716 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 12:48:10.959730 kernel: acpiphp: Slot [3] registered Jan 29 12:48:10.959739 kernel: acpiphp: Slot [4] registered Jan 29 12:48:10.959747 kernel: acpiphp: Slot [5] registered Jan 29 12:48:10.959756 kernel: acpiphp: Slot [6] registered Jan 29 12:48:10.959764 kernel: acpiphp: Slot [7] registered Jan 29 12:48:10.959776 kernel: acpiphp: Slot [8] registered Jan 29 12:48:10.959785 kernel: acpiphp: Slot [9] registered Jan 29 12:48:10.959793 kernel: acpiphp: Slot [10] registered Jan 29 12:48:10.959802 
kernel: acpiphp: Slot [11] registered Jan 29 12:48:10.959810 kernel: acpiphp: Slot [12] registered Jan 29 12:48:10.959818 kernel: acpiphp: Slot [13] registered Jan 29 12:48:10.959827 kernel: acpiphp: Slot [14] registered Jan 29 12:48:10.959836 kernel: acpiphp: Slot [15] registered Jan 29 12:48:10.959844 kernel: acpiphp: Slot [16] registered Jan 29 12:48:10.959854 kernel: acpiphp: Slot [17] registered Jan 29 12:48:10.959862 kernel: acpiphp: Slot [18] registered Jan 29 12:48:10.959871 kernel: acpiphp: Slot [19] registered Jan 29 12:48:10.959879 kernel: acpiphp: Slot [20] registered Jan 29 12:48:10.959887 kernel: acpiphp: Slot [21] registered Jan 29 12:48:10.959896 kernel: acpiphp: Slot [22] registered Jan 29 12:48:10.959904 kernel: acpiphp: Slot [23] registered Jan 29 12:48:10.959913 kernel: acpiphp: Slot [24] registered Jan 29 12:48:10.959921 kernel: acpiphp: Slot [25] registered Jan 29 12:48:10.959929 kernel: acpiphp: Slot [26] registered Jan 29 12:48:10.959940 kernel: acpiphp: Slot [27] registered Jan 29 12:48:10.959948 kernel: acpiphp: Slot [28] registered Jan 29 12:48:10.959956 kernel: acpiphp: Slot [29] registered Jan 29 12:48:10.959965 kernel: acpiphp: Slot [30] registered Jan 29 12:48:10.959973 kernel: acpiphp: Slot [31] registered Jan 29 12:48:10.959982 kernel: PCI host bridge to bus 0000:00 Jan 29 12:48:10.960072 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:48:10.960156 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:48:10.960241 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:48:10.960873 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 12:48:10.960961 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 29 12:48:10.961046 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:48:10.961180 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 12:48:10.961292 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 12:48:10.961423 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 29 12:48:10.961528 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 29 12:48:10.961634 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 12:48:10.961729 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 12:48:10.961827 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 12:48:10.961923 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 12:48:10.962027 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 12:48:10.962130 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 29 12:48:10.962226 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 29 12:48:10.962331 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 29 12:48:10.963463 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 29 12:48:10.963563 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 29 12:48:10.963656 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 29 12:48:10.963747 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 29 12:48:10.963844 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:48:10.963944 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 29 12:48:10.964036 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 29 12:48:10.964127 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 29 12:48:10.964218 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 29 12:48:10.964308 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 29 12:48:10.964423 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 29 12:48:10.964523 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 12:48:10.964616 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 29 12:48:10.964706 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 29 12:48:10.964807 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 29 12:48:10.964900 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 29 12:48:10.964992 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 29 12:48:10.965091 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:48:10.965207 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 29 12:48:10.965299 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 29 12:48:10.965390 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 29 12:48:10.965436 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:48:10.965445 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:48:10.965454 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:48:10.965463 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:48:10.965472 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 12:48:10.965485 kernel: iommu: Default domain type: Translated Jan 29 12:48:10.965493 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:48:10.965502 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:48:10.965511 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:48:10.965519 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 12:48:10.965528 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 29 12:48:10.965625 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 29 12:48:10.965717 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 29 12:48:10.965815 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:48:10.965829 kernel: vgaarb: loaded Jan 29 12:48:10.965838 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:48:10.965846 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:48:10.965855 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:48:10.965864 kernel: pnp: PnP ACPI init Jan 29 12:48:10.965960 kernel: pnp 00:03: [dma 2] Jan 29 12:48:10.965974 kernel: pnp: PnP ACPI: found 5 devices Jan 29 12:48:10.965984 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:48:10.965996 kernel: NET: Registered PF_INET protocol family Jan 29 12:48:10.966005 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:48:10.966014 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 12:48:10.966023 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:48:10.966032 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:48:10.966041 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 29 12:48:10.966049 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 12:48:10.966058 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:48:10.966069 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:48:10.966078 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:48:10.966086 kernel: NET: Registered PF_XDP protocol family Jan 29 12:48:10.966168 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:48:10.966250 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:48:10.966330 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:48:10.966451 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 29 12:48:10.966533 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 29 12:48:10.966627 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 29 12:48:10.966724 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 12:48:10.966738 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:48:10.966747 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:48:10.966756 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 29 12:48:10.966765 kernel: Initialise system trusted keyrings Jan 29 12:48:10.966773 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 12:48:10.966782 kernel: Key type asymmetric registered Jan 29 12:48:10.966791 kernel: Asymmetric key parser 'x509' registered Jan 29 12:48:10.966802 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:48:10.966812 kernel: io scheduler mq-deadline registered Jan 29 12:48:10.966820 kernel: io scheduler kyber registered Jan 29 12:48:10.966829 kernel: io scheduler bfq registered Jan 29 12:48:10.966838 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:48:10.966848 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 29 12:48:10.966857 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 12:48:10.966866 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 12:48:10.966875 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 12:48:10.966885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:48:10.966894 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:48:10.966903 kernel: random: crng init done Jan 29 12:48:10.966911 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:48:10.966920 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:48:10.966929 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:48:10.967022 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 12:48:10.967037 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:48:10.967122 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 12:48:10.967217 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T12:48:10 UTC (1738154890) Jan 29 12:48:10.967308 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 29 12:48:10.967322 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 12:48:10.967332 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:48:10.967341 kernel: Segment Routing with IPv6 Jan 29 12:48:10.967350 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:48:10.967360 kernel: NET: Registered PF_PACKET 
protocol family Jan 29 12:48:10.967369 kernel: Key type dns_resolver registered Jan 29 12:48:10.967382 kernel: IPI shorthand broadcast: enabled Jan 29 12:48:10.968521 kernel: sched_clock: Marking stable (973006816, 167324740)->(1183808723, -43477167) Jan 29 12:48:10.968541 kernel: registered taskstats version 1 Jan 29 12:48:10.968551 kernel: Loading compiled-in X.509 certificates Jan 29 12:48:10.968560 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:48:10.968570 kernel: Key type .fscrypt registered Jan 29 12:48:10.968578 kernel: Key type fscrypt-provisioning registered Jan 29 12:48:10.968587 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:48:10.968596 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:48:10.968608 kernel: ima: No architecture policies found Jan 29 12:48:10.968617 kernel: clk: Disabling unused clocks Jan 29 12:48:10.968626 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:48:10.968634 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:48:10.968643 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:48:10.968652 kernel: Run /init as init process Jan 29 12:48:10.968660 kernel: with arguments: Jan 29 12:48:10.968669 kernel: /init Jan 29 12:48:10.968677 kernel: with environment: Jan 29 12:48:10.968688 kernel: HOME=/ Jan 29 12:48:10.968696 kernel: TERM=linux Jan 29 12:48:10.968704 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:48:10.968716 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:48:10.968728 systemd[1]: Detected virtualization kvm. Jan 29 12:48:10.968738 systemd[1]: Detected architecture x86-64. Jan 29 12:48:10.968747 systemd[1]: Running in initrd. Jan 29 12:48:10.968758 systemd[1]: No hostname configured, using default hostname. Jan 29 12:48:10.968767 systemd[1]: Hostname set to <localhost>. Jan 29 12:48:10.968777 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:48:10.968786 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:48:10.968795 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:48:10.968805 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:48:10.968815 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:48:10.968834 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:48:10.968846 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:48:10.968856 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:48:10.968867 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:48:10.968877 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:48:10.968887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:48:10.968899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:48:10.968908 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:48:10.968918 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:48:10.968928 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:48:10.968937 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:48:10.968947 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:48:10.968957 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:48:10.968967 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:48:10.968979 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:48:10.968988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:48:10.968998 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:48:10.969008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:48:10.969018 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:48:10.969027 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:48:10.969037 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:48:10.969047 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:48:10.969056 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:48:10.969068 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:48:10.969077 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:48:10.969124 systemd-journald[184]: Collecting audit messages is disabled. Jan 29 12:48:10.969148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:48:10.969162 systemd-journald[184]: Journal started Jan 29 12:48:10.969184 systemd-journald[184]: Runtime Journal (/run/log/journal/4a90038c6a8e4b638bf1e12013b67a6a) is 8.0M, max 78.3M, 70.3M free. Jan 29 12:48:10.982537 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:48:10.984152 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:48:10.984775 systemd-modules-load[185]: Inserted module 'overlay' Jan 29 12:48:10.986314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:48:10.988123 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:48:10.999763 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:48:11.008801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:48:11.053418 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:48:11.053443 kernel: Bridge firewalling registered Jan 29 12:48:11.029129 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 29 12:48:11.052899 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:48:11.059531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:48:11.063883 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:48:11.067549 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 29 12:48:11.069509 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:48:11.071234 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:48:11.083136 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:48:11.088541 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:48:11.090963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:48:11.098553 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:48:11.100606 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:48:11.102534 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:48:11.123939 dracut-cmdline[219]: dracut-dracut-053 Jan 29 12:48:11.128142 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:48:11.133386 systemd-resolved[213]: Positive Trust Anchors: Jan 29 12:48:11.133423 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:48:11.135018 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:48:11.140155 systemd-resolved[213]: Defaulting to hostname 'linux'. Jan 29 12:48:11.141120 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:48:11.142026 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:48:11.194467 kernel: SCSI subsystem initialized Jan 29 12:48:11.205449 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:48:11.217763 kernel: iscsi: registered transport (tcp) Jan 29 12:48:11.240597 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:48:11.240673 kernel: QLogic iSCSI HBA Driver Jan 29 12:48:11.281207 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:48:11.292655 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:48:11.340226 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 12:48:11.340312 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:48:11.340340 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:48:11.385499 kernel: raid6: sse2x4 gen() 12795 MB/s Jan 29 12:48:11.403481 kernel: raid6: sse2x2 gen() 14893 MB/s Jan 29 12:48:11.422459 kernel: raid6: sse2x1 gen() 9900 MB/s Jan 29 12:48:11.422548 kernel: raid6: using algorithm sse2x2 gen() 14893 MB/s Jan 29 12:48:11.441449 kernel: raid6: .... xor() 8706 MB/s, rmw enabled Jan 29 12:48:11.441513 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 12:48:11.463458 kernel: xor: measuring software checksum speed Jan 29 12:48:11.463523 kernel: prefetch64-sse : 16287 MB/sec Jan 29 12:48:11.466430 kernel: generic_sse : 15548 MB/sec Jan 29 12:48:11.466484 kernel: xor: using function: prefetch64-sse (16287 MB/sec) Jan 29 12:48:11.648472 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:48:11.661236 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:48:11.666674 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:48:11.682509 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 29 12:48:11.687138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:48:11.697738 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:48:11.716191 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Jan 29 12:48:11.758061 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:48:11.766676 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:48:11.811778 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:48:11.822789 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:48:11.845796 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:48:11.852958 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:48:11.854290 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:48:11.857382 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:48:11.866907 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:48:11.895685 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:48:11.925991 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 29 12:48:11.963601 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 29 12:48:11.963719 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:48:11.963733 kernel: GPT:17805311 != 20971519 Jan 29 12:48:11.963744 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:48:11.963756 kernel: GPT:17805311 != 20971519 Jan 29 12:48:11.963766 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:48:11.963777 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:48:11.963788 kernel: libata version 3.00 loaded. Jan 29 12:48:11.927716 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:48:11.927845 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:48:11.933195 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 12:48:11.934118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:48:11.934285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:48:11.935575 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:48:11.941861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:48:11.973525 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 29 12:48:11.981635 kernel: scsi host0: ata_piix Jan 29 12:48:11.981773 kernel: scsi host1: ata_piix Jan 29 12:48:11.981910 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 29 12:48:11.981931 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 29 12:48:11.997431 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Jan 29 12:48:12.009420 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (449) Jan 29 12:48:12.020922 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 12:48:12.040481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:48:12.046556 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 12:48:12.052137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:48:12.056763 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 12:48:12.057333 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 12:48:12.069582 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:48:12.072130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:48:12.093441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:48:12.094820 disk-uuid[502]: Primary Header is updated. Jan 29 12:48:12.094820 disk-uuid[502]: Secondary Entries is updated. Jan 29 12:48:12.094820 disk-uuid[502]: Secondary Header is updated. Jan 29 12:48:12.110968 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:48:12.113700 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:48:13.123468 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:48:13.123543 disk-uuid[509]: The operation has completed successfully. Jan 29 12:48:13.170854 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:48:13.170963 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:48:13.200681 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:48:13.205157 sh[527]: Success Jan 29 12:48:13.234460 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 29 12:48:13.297495 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:48:13.306499 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:48:13.307301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 12:48:13.331906 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:48:13.331977 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:48:13.333906 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:48:13.337233 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:48:13.337275 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:48:13.353950 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:48:13.356363 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:48:13.361680 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:48:13.364865 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:48:13.380737 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:48:13.380818 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:48:13.382733 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:48:13.390461 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:48:13.398677 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:48:13.400877 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:48:13.411683 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:48:13.420695 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:48:13.488932 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:48:13.500862 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:48:13.524205 systemd-networkd[711]: lo: Link UP Jan 29 12:48:13.524978 systemd-networkd[711]: lo: Gained carrier Jan 29 12:48:13.526205 systemd-networkd[711]: Enumeration completed Jan 29 12:48:13.526609 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:48:13.527240 systemd[1]: Reached target network.target - Network. Jan 29 12:48:13.529277 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:48:13.529280 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:48:13.531799 systemd-networkd[711]: eth0: Link UP Jan 29 12:48:13.531802 systemd-networkd[711]: eth0: Gained carrier Jan 29 12:48:13.531809 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:48:13.546784 systemd-networkd[711]: eth0: DHCPv4 address 172.24.4.220/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:48:13.575528 ignition[629]: Ignition 2.19.0 Jan 29 12:48:13.575539 ignition[629]: Stage: fetch-offline Jan 29 12:48:13.577591 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 12:48:13.575580 ignition[629]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:48:13.575591 ignition[629]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:48:13.575701 ignition[629]: parsed url from cmdline: "" Jan 29 12:48:13.575706 ignition[629]: no config URL provided Jan 29 12:48:13.575713 ignition[629]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:48:13.575725 ignition[629]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:48:13.575731 ignition[629]: failed to fetch config: resource requires networking Jan 29 12:48:13.575973 ignition[629]: Ignition finished successfully Jan 29 12:48:13.587573 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 12:48:13.599677 ignition[721]: Ignition 2.19.0 Jan 29 12:48:13.599689 ignition[721]: Stage: fetch Jan 29 12:48:13.599875 ignition[721]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:48:13.599887 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:48:13.599986 ignition[721]: parsed url from cmdline: "" Jan 29 12:48:13.599990 ignition[721]: no config URL provided Jan 29 12:48:13.599996 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:48:13.600004 ignition[721]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:48:13.600123 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 29 12:48:13.600230 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 29 12:48:13.600267 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 29 12:48:13.752632 ignition[721]: GET result: OK Jan 29 12:48:13.752785 ignition[721]: parsing config with SHA512: ca32df74acb410e60dcebd283f3d15f4de9c853933243c3e9f46cbfd99e0237d10281b1e914844887b76bf181f1fa91276ff1a1e0b28b951d3b3988e03c8a13e Jan 29 12:48:13.762798 unknown[721]: fetched base config from "system" Jan 29 12:48:13.762822 unknown[721]: fetched base config from "system" Jan 29 12:48:13.763699 ignition[721]: fetch: fetch complete Jan 29 12:48:13.762837 unknown[721]: fetched user config from "openstack" Jan 29 12:48:13.763711 ignition[721]: fetch: fetch passed Jan 29 12:48:13.767020 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:48:13.763789 ignition[721]: Ignition finished successfully Jan 29 12:48:13.774618 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:48:13.820551 ignition[727]: Ignition 2.19.0 Jan 29 12:48:13.820575 ignition[727]: Stage: kargs Jan 29 12:48:13.820977 ignition[727]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:48:13.821003 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:48:13.823719 ignition[727]: kargs: kargs passed Jan 29 12:48:13.825821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:48:13.823816 ignition[727]: Ignition finished successfully Jan 29 12:48:13.832657 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:48:13.860017 ignition[733]: Ignition 2.19.0 Jan 29 12:48:13.861330 ignition[733]: Stage: disks Jan 29 12:48:13.861687 ignition[733]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:48:13.861707 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:48:13.864798 ignition[733]: disks: disks passed Jan 29 12:48:13.864893 ignition[733]: Ignition finished successfully Jan 29 12:48:13.867008 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 29 12:48:13.869751 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:48:13.871208 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:48:13.873824 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:48:13.876265 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:48:13.878468 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:48:13.889714 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:48:13.920255 systemd-fsck[741]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 12:48:13.935098 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:48:13.945600 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:48:14.091480 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:48:14.093006 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:48:14.094817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:48:14.102549 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:48:14.105687 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:48:14.107930 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:48:14.110698 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 29 12:48:14.112250 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:48:14.112306 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:48:14.132220 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (749) Jan 29 12:48:14.132244 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:48:14.132257 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:48:14.132270 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:48:14.119792 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:48:14.146905 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:48:14.157451 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:48:14.167734 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:48:14.256036 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:48:14.264999 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:48:14.268365 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:48:14.272099 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:48:14.354616 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:48:14.359524 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:48:14.362524 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:48:14.367423 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 29 12:48:14.370494 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:48:14.396477 ignition[866]: INFO : Ignition 2.19.0 Jan 29 12:48:14.396477 ignition[866]: INFO : Stage: mount Jan 29 12:48:14.397828 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:48:14.397828 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:48:14.400890 ignition[866]: INFO : mount: mount passed Jan 29 12:48:14.400890 ignition[866]: INFO : Ignition finished successfully Jan 29 12:48:14.399466 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:48:14.404105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:48:15.287681 systemd-networkd[711]: eth0: Gained IPv6LL Jan 29 12:48:21.332617 coreos-metadata[751]: Jan 29 12:48:21.332 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:48:21.373194 coreos-metadata[751]: Jan 29 12:48:21.373 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:48:21.389369 coreos-metadata[751]: Jan 29 12:48:21.389 INFO Fetch successful Jan 29 12:48:21.389369 coreos-metadata[751]: Jan 29 12:48:21.389 INFO wrote hostname ci-4081-3-0-6-7edc95d587.novalocal to /sysroot/etc/hostname Jan 29 12:48:21.393312 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 29 12:48:21.393586 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 29 12:48:21.406617 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:48:21.426750 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:48:21.455484 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (883) Jan 29 12:48:21.464673 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:48:21.464750 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:48:21.469234 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:48:21.480529 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:48:21.485182 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:48:21.532238 ignition[901]: INFO : Ignition 2.19.0 Jan 29 12:48:21.532238 ignition[901]: INFO : Stage: files Jan 29 12:48:21.535218 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:48:21.535218 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:48:21.535218 ignition[901]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:48:21.541128 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:48:21.541128 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:48:21.541128 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:48:21.547071 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:48:21.547071 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:48:21.547071 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 12:48:21.547071 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 12:48:21.541533 unknown[901]: wrote ssh authorized keys file for user: core Jan 29 12:48:21.619721 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:48:21.942229 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 12:48:21.942229 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 12:48:21.947084 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 12:48:22.490850 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 12:48:24.070731 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 12:48:24.070731 ignition[901]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 12:48:24.079007 ignition[901]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:48:24.079007 ignition[901]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:48:24.079007 ignition[901]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 12:48:24.079007 ignition[901]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:48:24.079007 ignition[901]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:48:24.079007 ignition[901]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:48:24.079007 ignition[901]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:48:24.079007 ignition[901]: INFO : files: files passed Jan 29 12:48:24.079007 ignition[901]: INFO : Ignition finished successfully Jan 29 12:48:24.077389 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:48:24.089609 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:48:24.095542 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:48:24.096604 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:48:24.096681 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:48:24.117928 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:48:24.117928 initrd-setup-root-after-ignition[930]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:48:24.120872 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:48:24.123521 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:48:24.124523 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:48:24.131525 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:48:24.185168 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:48:24.185458 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:48:24.189142 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 29 12:48:24.200962 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:48:24.203922 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:48:24.213683 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:48:24.242681 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:48:24.252674 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:48:24.282451 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:48:24.286134 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:48:24.287909 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:48:24.290844 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:48:24.291136 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:48:24.294443 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:48:24.296302 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:48:24.299302 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:48:24.301972 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:48:24.304604 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:48:24.307614 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:48:24.310626 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:48:24.313708 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:48:24.316663 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:48:24.319716 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:48:24.322389 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:48:24.322710 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:48:24.325891 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:48:24.327869 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:48:24.330474 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:48:24.330746 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:48:24.333537 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:48:24.333889 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:48:24.337635 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:48:24.337936 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:48:24.339871 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:48:24.340208 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:48:24.350945 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:48:24.353285 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:48:24.353768 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:48:24.363871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:48:24.365210 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 29 12:48:24.365651 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:48:24.373353 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:48:24.374038 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:48:24.382355 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:48:24.383078 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:48:24.390212 ignition[954]: INFO : Ignition 2.19.0 Jan 29 12:48:24.390212 ignition[954]: INFO : Stage: umount Jan 29 12:48:24.390212 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:48:24.390212 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:48:24.390212 ignition[954]: INFO : umount: umount passed Jan 29 12:48:24.390212 ignition[954]: INFO : Ignition finished successfully Jan 29 12:48:24.392723 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:48:24.392809 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:48:24.394006 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:48:24.394080 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:48:24.395665 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:48:24.395707 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:48:24.396343 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:48:24.396382 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:48:24.396906 systemd[1]: Stopped target network.target - Network. Jan 29 12:48:24.399610 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:48:24.399654 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:48:24.400600 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:48:24.401082 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:48:24.401750 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:48:24.402895 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:48:24.404471 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:48:24.404959 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:48:24.404993 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:48:24.406042 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:48:24.406075 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:48:24.408601 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:48:24.408642 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:48:24.409207 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:48:24.409248 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:48:24.409871 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:48:24.410595 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:48:24.412955 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:48:24.414435 systemd-networkd[711]: eth0: DHCPv6 lease lost Jan 29 12:48:24.415389 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 29 12:48:24.415540 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:48:24.417669 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:48:24.417748 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:48:24.419932 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:48:24.419975 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:48:24.428497 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:48:24.430688 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:48:24.430741 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:48:24.431808 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:48:24.431850 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:48:24.437472 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:48:24.437512 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:48:24.438521 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:48:24.438560 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:48:24.439775 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:48:24.450789 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:48:24.450928 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:48:24.452539 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:48:24.452585 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:48:24.453725 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:48:24.453759 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:48:24.454873 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:48:24.454915 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:48:24.456639 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:48:24.456679 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:48:24.457796 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:48:24.457836 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:48:24.467586 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:48:24.468742 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:48:24.468807 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:48:24.469371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:48:24.469428 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:48:24.470242 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:48:24.472102 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:48:24.473313 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:48:24.473571 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:48:24.651608 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 29 12:48:24.651852 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:48:24.655809 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:48:24.657260 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:48:24.657384 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:48:24.666736 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:48:24.694833 systemd[1]: Switching root. Jan 29 12:48:24.737097 systemd-journald[184]: Journal stopped Jan 29 12:48:26.196376 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 12:48:26.196516 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:48:26.196536 kernel: SELinux: policy capability open_perms=1 Jan 29 12:48:26.196547 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:48:26.196559 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:48:26.196570 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:48:26.196581 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:48:26.196592 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:48:26.196609 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:48:26.196621 kernel: audit: type=1403 audit(1738154905.192:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:48:26.196633 systemd[1]: Successfully loaded SELinux policy in 80.926ms. Jan 29 12:48:26.196653 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.571ms. Jan 29 12:48:26.196666 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:48:26.196678 systemd[1]: Detected virtualization kvm. Jan 29 12:48:26.196690 systemd[1]: Detected architecture x86-64. Jan 29 12:48:26.196704 systemd[1]: Detected first boot. Jan 29 12:48:26.196718 systemd[1]: Hostname set to <ci-4081-3-0-6-7edc95d587.novalocal>. Jan 29 12:48:26.196730 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:48:26.196741 zram_generator::config[997]: No configuration found. Jan 29 12:48:26.196755 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:48:26.196767 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 12:48:26.196779 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 12:48:26.196790 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 12:48:26.196805 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:48:26.196817 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:48:26.196828 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:48:26.196840 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:48:26.196852 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:48:26.196864 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:48:26.196876 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:48:26.196888 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 12:48:26.196900 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:48:26.196914 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:48:26.196926 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:48:26.196942 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:48:26.196954 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:48:26.196966 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:48:26.196980 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:48:26.196992 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:48:26.197004 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 12:48:26.197018 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 12:48:26.197030 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 12:48:26.197041 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:48:26.197067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:48:26.197080 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:48:26.197091 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:48:26.197103 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:48:26.197117 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:48:26.197129 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:48:26.197141 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:48:26.197153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:48:26.197164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:48:26.197180 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:48:26.197192 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:48:26.197203 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:48:26.197215 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:48:26.197231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:48:26.197243 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:48:26.197256 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:48:26.197268 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:48:26.197280 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:48:26.197293 systemd[1]: Reached target machines.target - Containers. Jan 29 12:48:26.197304 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:48:26.197316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 29 12:48:26.197330 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:48:26.197342 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:48:26.197353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:48:26.197365 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:48:26.197377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:48:26.197389 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:48:26.197432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:48:26.197446 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:48:26.197458 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 12:48:26.197472 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 12:48:26.197484 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 12:48:26.197496 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 12:48:26.197507 kernel: loop: module loaded Jan 29 12:48:26.197519 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:48:26.197531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:48:26.197543 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:48:26.197556 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:48:26.197567 kernel: ACPI: bus type drm_connector registered Jan 29 12:48:26.197580 kernel: fuse: init (API version 7.39) Jan 29 12:48:26.197606 systemd-journald[1097]: Collecting audit messages is disabled. Jan 29 12:48:26.197630 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:48:26.197642 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 12:48:26.197654 systemd-journald[1097]: Journal started Jan 29 12:48:26.197682 systemd-journald[1097]: Runtime Journal (/run/log/journal/4a90038c6a8e4b638bf1e12013b67a6a) is 8.0M, max 78.3M, 70.3M free. Jan 29 12:48:25.853797 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:48:25.880424 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:48:25.880772 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 12:48:26.198633 systemd[1]: Stopped verity-setup.service. Jan 29 12:48:26.203628 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:48:26.209420 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:48:26.210005 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:48:26.210704 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:48:26.211290 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:48:26.211877 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:48:26.213659 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:48:26.214300 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
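[Annotation] The runtime journal line above (8.0M in use, 78.3M max) reflects journald's default sizing, which caps the journal at a fraction of the backing filesystem (/run here) rather than a fixed number. If fixed limits are wanted they can be pinned in a drop-in; an illustrative sketch with assumed values:

    # /etc/systemd/journald.conf.d/10-size.conf (illustrative values)
    [Journal]
    RuntimeMaxUse=64M
    SystemMaxUse=512M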
Jan 29 12:48:26.215007 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:48:26.215803 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:48:26.216579 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:48:26.217444 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:48:26.218208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:48:26.218324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:48:26.219047 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:48:26.219162 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:48:26.220873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:48:26.221018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:48:26.221846 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:48:26.221969 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:48:26.222689 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:48:26.222803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:48:26.223610 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:48:26.224303 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:48:26.225031 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:48:26.234835 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:48:26.241521 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:48:26.247602 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:48:26.248215 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:48:26.248251 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:48:26.250847 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:48:26.259582 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:48:26.264578 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:48:26.265233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:48:26.272727 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:48:26.276181 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:48:26.277267 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:48:26.284631 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:48:26.286651 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:48:26.287703 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:48:26.293997 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 29 12:48:26.296517 systemd-journald[1097]: Time spent on flushing to /var/log/journal/4a90038c6a8e4b638bf1e12013b67a6a is 77.521ms for 940 entries. Jan 29 12:48:26.296517 systemd-journald[1097]: System Journal (/var/log/journal/4a90038c6a8e4b638bf1e12013b67a6a) is 8.0M, max 584.8M, 576.8M free. Jan 29 12:48:26.387031 systemd-journald[1097]: Received client request to flush runtime journal. Jan 29 12:48:26.387072 kernel: loop0: detected capacity change from 0 to 140768 Jan 29 12:48:26.303600 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:48:26.307206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:48:26.308743 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:48:26.310118 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:48:26.311483 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:48:26.328581 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:48:26.329500 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:48:26.335338 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:48:26.344607 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:48:26.367541 udevadm[1136]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 12:48:26.390161 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:48:26.401291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:48:26.449930 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:48:26.454933 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:48:26.455744 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:48:26.461638 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:48:26.474451 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:48:26.503467 kernel: loop1: detected capacity change from 0 to 218376 Jan 29 12:48:26.518781 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Jan 29 12:48:26.519181 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Jan 29 12:48:26.527165 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:48:26.562439 kernel: loop2: detected capacity change from 0 to 142488 Jan 29 12:48:26.624486 kernel: loop3: detected capacity change from 0 to 8 Jan 29 12:48:26.653142 kernel: loop4: detected capacity change from 0 to 140768 Jan 29 12:48:26.695427 kernel: loop5: detected capacity change from 0 to 218376 Jan 29 12:48:26.746472 kernel: loop6: detected capacity change from 0 to 142488 Jan 29 12:48:26.815094 kernel: loop7: detected capacity change from 0 to 8 Jan 29 12:48:26.817835 (sd-merge)[1156]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 29 12:48:26.820365 (sd-merge)[1156]: Merged extensions into '/usr'. Jan 29 12:48:26.828064 systemd[1]: Reloading requested from client PID 1130 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:48:26.828081 systemd[1]: Reloading... 
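[Annotation] The (sd-merge) entries above are systemd-sysext's merge step: each of the four extension images is mounted and overlaid onto /usr (and /opt) via overlayfs, which is why systemd then reloads. An image is only accepted if it carries an extension-release file whose fields match the host; for the Kubernetes sysext downloaded during the Ignition files stage, that file would look roughly like the following (a sketch, not the verbatim file):

    # usr/lib/extension-release.d/extension-release.kubernetes (inside the .raw image)
    ID=flatcar          # or ID=_any for distribution-agnostic images
    SYSEXT_LEVEL=1.0
    ARCHITECTURE=x86-64

The extension is activated by the symlink Ignition wrote at /etc/extensions/kubernetes.raw.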
Jan 29 12:48:26.935419 zram_generator::config[1182]: No configuration found. Jan 29 12:48:27.119376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:48:27.176205 systemd[1]: Reloading finished in 347 ms. Jan 29 12:48:27.212232 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:48:27.213127 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:48:27.228674 systemd[1]: Starting ensure-sysext.service... Jan 29 12:48:27.232543 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:48:27.236657 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:48:27.242682 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:48:27.242699 systemd[1]: Reloading... Jan 29 12:48:27.262550 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:48:27.263290 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:48:27.266262 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:48:27.266676 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jan 29 12:48:27.266745 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jan 29 12:48:27.271168 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:48:27.273559 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:48:27.273660 systemd-tmpfiles[1239]: Skipping /boot Jan 29 12:48:27.282698 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:48:27.282798 systemd-tmpfiles[1239]: Skipping /boot Jan 29 12:48:27.314110 systemd-udevd[1240]: Using default interface naming scheme 'v255'. Jan 29 12:48:27.317409 zram_generator::config[1264]: No configuration found. Jan 29 12:48:27.485439 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 12:48:27.506842 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 12:48:27.514412 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1291) Jan 29 12:48:27.542978 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
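[Annotation] The docker.socket warning above (repeated on each reload) is self-describing: the shipped unit still listens on the legacy /var/run path, and systemd rewrites it to /run at load time. The fix in the unit file itself is a one-line change; an illustrative excerpt:

    # docker.socket (excerpt; illustrative)
    [Socket]
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker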
Jan 29 12:48:27.559629 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:48:27.603420 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 12:48:27.612440 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:48:27.628349 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 12:48:27.628436 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 12:48:27.634178 kernel: Console: switching to colour dummy device 80x25 Jan 29 12:48:27.634219 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 12:48:27.634237 kernel: [drm] features: -context_init Jan 29 12:48:27.635899 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 12:48:27.636421 kernel: [drm] number of scanouts: 1 Jan 29 12:48:27.636471 kernel: [drm] number of cap sets: 0 Jan 29 12:48:27.636604 systemd[1]: Reloading finished in 393 ms. Jan 29 12:48:27.639412 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 12:48:27.642457 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 12:48:27.642540 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 12:48:27.650623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:48:27.655113 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 12:48:27.657786 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:48:27.663865 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:48:27.686961 systemd[1]: Finished ensure-sysext.service. Jan 29 12:48:27.691228 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:48:27.702264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:48:27.704843 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:48:27.709542 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:48:27.717589 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:48:27.718189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:48:27.719599 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:48:27.724552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:48:27.726264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:48:27.728627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:48:27.731251 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:48:27.732737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:48:27.736601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:48:27.738238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:48:27.743260 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:48:27.757588 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 12:48:27.769519 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:48:27.772525 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:48:27.774588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:48:27.775866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:48:27.776538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:48:27.776931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:48:27.777742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:48:27.777995 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:48:27.778275 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:48:27.778388 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:48:27.782806 lvm[1362]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:48:27.785230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:48:27.785648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:48:27.791633 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:48:27.806884 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:48:27.807363 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:48:27.811923 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:48:27.825654 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:48:27.835640 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:48:27.845010 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:48:27.848126 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:48:27.852848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:48:27.865639 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:48:27.875649 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:48:27.882938 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:48:27.884582 augenrules[1402]: No rules Jan 29 12:48:27.885420 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:48:27.892080 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:48:27.909780 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:48:27.912156 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:48:27.916502 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:48:27.984714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 12:48:27.985143 systemd-networkd[1374]: lo: Link UP Jan 29 12:48:27.985384 systemd-networkd[1374]: lo: Gained carrier Jan 29 12:48:27.987836 systemd-networkd[1374]: Enumeration completed Jan 29 12:48:27.988227 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:48:27.992265 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:48:27.992276 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:48:27.993105 systemd-networkd[1374]: eth0: Link UP Jan 29 12:48:27.993109 systemd-networkd[1374]: eth0: Gained carrier Jan 29 12:48:27.993124 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:48:27.998574 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:48:28.008232 systemd-resolved[1375]: Positive Trust Anchors: Jan 29 12:48:28.008249 systemd-resolved[1375]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:48:28.008290 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:48:28.008501 systemd-networkd[1374]: eth0: DHCPv4 address 172.24.4.220/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:48:28.010117 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:48:28.012687 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:48:28.015669 systemd-resolved[1375]: Using system hostname 'ci-4081-3-0-6-7edc95d587.novalocal'. Jan 29 12:48:28.017132 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:48:28.019315 systemd[1]: Reached target network.target - Network. Jan 29 12:48:28.020358 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:48:28.021231 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:48:28.022303 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:48:28.023247 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:48:28.024099 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:48:28.025555 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:48:28.026436 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:48:28.027482 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:48:28.027502 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:48:28.028282 systemd[1]: Reached target timers.target - Timer Units. 
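[Annotation] eth0 was configured by the lowest-priority catch-all shipped in /usr/lib/systemd/network, hence the note about the "potentially unpredictable interface name". The shape of such a catch-all is roughly the following (a sketch, not the verbatim Flatcar file):

    # zz-default.network (illustrative)
    [Match]
    Name=*

    [Network]
    DHCP=yes

A more specific .network file dropped into /etc/systemd/network sorts earlier and would take precedence for eth0.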
Jan 29 12:48:28.030351 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:48:28.033313 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:48:28.061135 systemd-timesyncd[1376]: Contacted time server 45.13.105.44:123 (0.flatcar.pool.ntp.org). Jan 29 12:48:28.061181 systemd-timesyncd[1376]: Initial clock synchronization to Wed 2025-01-29 12:48:28.057043 UTC. Jan 29 12:48:28.076338 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:48:28.080310 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:48:28.082970 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:48:28.085382 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:48:28.087999 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:48:28.088185 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:48:28.098620 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:48:28.104375 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:48:28.114757 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:48:28.126697 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:48:28.138372 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:48:28.142329 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:48:28.152871 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:48:28.157537 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:48:28.168574 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:48:28.172800 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:48:28.187306 jq[1430]: false Jan 29 12:48:28.187584 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:48:28.190990 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:48:28.192553 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:48:28.194564 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:48:28.199716 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:48:28.204235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:48:28.205000 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:48:28.206787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:48:28.207125 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:48:28.214496 jq[1441]: true Jan 29 12:48:28.234921 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:48:28.238637 jq[1445]: true Jan 29 12:48:28.241532 systemd[1]: motdgen.service: Deactivated successfully. 
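[Annotation] prepare-helm.service, starting above, is the unit Ignition wrote and preset-enabled earlier. Its contents never appear in the log; given the name and the tarball fetched to /opt, a oneshot of roughly this shape is plausible (a hypothetical reconstruction, not the actual unit):

    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=/opt/helm-v3.17.0-linux-amd64.tar.gz

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/usr/bin/mkdir -p /opt/bin
    ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.17.0-linux-amd64.tar.gz linux-amd64/helm

    [Install]
    WantedBy=multi-user.target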
Jan 29 12:48:28.241713 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:48:28.250335 extend-filesystems[1431]: Found loop4 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found loop5 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found loop6 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found loop7 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found vda Jan 29 12:48:28.256539 extend-filesystems[1431]: Found vda1 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found vda2 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found vda3 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found usr Jan 29 12:48:28.256539 extend-filesystems[1431]: Found vda4 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found vda6 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found vda7 Jan 29 12:48:28.256539 extend-filesystems[1431]: Found vda9 Jan 29 12:48:28.256539 extend-filesystems[1431]: Checking size of /dev/vda9 Jan 29 12:48:28.276165 update_engine[1439]: I20250129 12:48:28.264194 1439 main.cc:92] Flatcar Update Engine starting Jan 29 12:48:28.371087 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:48:28.446908 systemd-logind[1437]: New seat seat0. Jan 29 12:48:28.453488 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:48:28.459052 tar[1444]: linux-amd64/LICENSE Jan 29 12:48:28.459052 tar[1444]: linux-amd64/helm Jan 29 12:48:28.453565 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:48:28.454131 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:48:28.487472 extend-filesystems[1431]: Resized partition /dev/vda9 Jan 29 12:48:28.523433 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1287) Jan 29 12:48:28.542989 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:48:28.615604 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 29 12:48:28.627278 dbus-daemon[1427]: [system] SELinux support is enabled Jan 29 12:48:28.699466 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 29 12:48:28.699587 update_engine[1439]: I20250129 12:48:28.670478 1439 update_check_scheduler.cc:74] Next update check in 10m2s Jan 29 12:48:28.628180 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:48:28.669550 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 12:48:28.659754 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:48:28.659777 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:48:28.661606 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:48:28.661624 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:48:28.670668 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:48:28.694283 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
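[Annotation] update-engine and locksmithd both consult /etc/flatcar/update.conf, which Ignition wrote during the files stage. Its contents are not reproduced in the log; an illustrative example consistent with the strategy="reboot" that locksmithd reports just below:

    # /etc/flatcar/update.conf (illustrative)
    GROUP=stable
    REBOOT_STRATEGY=reboot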
Jan 29 12:48:28.702919 bash[1475]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:48:28.703300 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:48:28.703300 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:48:28.703300 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 29 12:48:28.705785 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Jan 29 12:48:28.704124 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:48:28.704289 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:48:28.709607 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:48:28.725810 systemd[1]: Starting sshkeys.service... Jan 29 12:48:28.758104 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:48:28.767748 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:48:28.864222 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:48:29.056865 containerd[1452]: time="2025-01-29T12:48:29.056729346Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:48:29.122000 containerd[1452]: time="2025-01-29T12:48:29.121444865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:48:29.124694 containerd[1452]: time="2025-01-29T12:48:29.124656243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:48:29.124694 containerd[1452]: time="2025-01-29T12:48:29.124689898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:48:29.124775 containerd[1452]: time="2025-01-29T12:48:29.124708318Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:48:29.124979 containerd[1452]: time="2025-01-29T12:48:29.124869590Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:48:29.124979 containerd[1452]: time="2025-01-29T12:48:29.124899669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:48:29.124979 containerd[1452]: time="2025-01-29T12:48:29.124963883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:48:29.125064 containerd[1452]: time="2025-01-29T12:48:29.124979589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:48:29.127861 containerd[1452]: time="2025-01-29T12:48:29.127515319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:48:29.127861 containerd[1452]: time="2025-01-29T12:48:29.127557267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:48:29.127861 containerd[1452]: time="2025-01-29T12:48:29.127580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:48:29.127861 containerd[1452]: time="2025-01-29T12:48:29.127593166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:48:29.127861 containerd[1452]: time="2025-01-29T12:48:29.127678475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:48:29.127983 containerd[1452]: time="2025-01-29T12:48:29.127873752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:48:29.128006 containerd[1452]: time="2025-01-29T12:48:29.127980826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:48:29.128006 containerd[1452]: time="2025-01-29T12:48:29.127998163Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:48:29.128096 containerd[1452]: time="2025-01-29T12:48:29.128073386Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:48:29.128151 containerd[1452]: time="2025-01-29T12:48:29.128130238Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:48:29.145151 containerd[1452]: time="2025-01-29T12:48:29.144564363Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:48:29.145151 containerd[1452]: time="2025-01-29T12:48:29.144634266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:48:29.145151 containerd[1452]: time="2025-01-29T12:48:29.144653308Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:48:29.145151 containerd[1452]: time="2025-01-29T12:48:29.144671407Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:48:29.145151 containerd[1452]: time="2025-01-29T12:48:29.144726927Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:48:29.145151 containerd[1452]: time="2025-01-29T12:48:29.144862718Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:48:29.148609 containerd[1452]: time="2025-01-29T12:48:29.148573408Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:48:29.148772 containerd[1452]: time="2025-01-29T12:48:29.148748862Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 29 12:48:29.148801 containerd[1452]: time="2025-01-29T12:48:29.148774604Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:48:29.148801 containerd[1452]: time="2025-01-29T12:48:29.148792043Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:48:29.148851 containerd[1452]: time="2025-01-29T12:48:29.148806817Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:48:29.148851 containerd[1452]: time="2025-01-29T12:48:29.148822021Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:48:29.148851 containerd[1452]: time="2025-01-29T12:48:29.148835584Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:48:29.148913 containerd[1452]: time="2025-01-29T12:48:29.148849987Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:48:29.148913 containerd[1452]: time="2025-01-29T12:48:29.148866293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:48:29.148913 containerd[1452]: time="2025-01-29T12:48:29.148880246Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:48:29.148913 containerd[1452]: time="2025-01-29T12:48:29.148893718Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:48:29.148913 containerd[1452]: time="2025-01-29T12:48:29.148908082Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:48:29.149020 containerd[1452]: time="2025-01-29T12:48:29.148930207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149020 containerd[1452]: time="2025-01-29T12:48:29.148946113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149020 containerd[1452]: time="2025-01-29T12:48:29.148960095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149020 containerd[1452]: time="2025-01-29T12:48:29.148975261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149020 containerd[1452]: time="2025-01-29T12:48:29.148989383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149128 containerd[1452]: time="2025-01-29T12:48:29.149029118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149128 containerd[1452]: time="2025-01-29T12:48:29.149044653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149128 containerd[1452]: time="2025-01-29T12:48:29.149059267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149128 containerd[1452]: time="2025-01-29T12:48:29.149072809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 29 12:48:29.149128 containerd[1452]: time="2025-01-29T12:48:29.149091429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149128 containerd[1452]: time="2025-01-29T12:48:29.149104891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149128 containerd[1452]: time="2025-01-29T12:48:29.149125104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149264 containerd[1452]: time="2025-01-29T12:48:29.149141991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149264 containerd[1452]: time="2025-01-29T12:48:29.149158849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:48:29.149264 containerd[1452]: time="2025-01-29T12:48:29.149180063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149264 containerd[1452]: time="2025-01-29T12:48:29.149193385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.149264 containerd[1452]: time="2025-01-29T12:48:29.149206986Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:48:29.149359 containerd[1452]: time="2025-01-29T12:48:29.149266584Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:48:29.149359 containerd[1452]: time="2025-01-29T12:48:29.149289661Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:48:29.151469 containerd[1452]: time="2025-01-29T12:48:29.149302022Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:48:29.151505 containerd[1452]: time="2025-01-29T12:48:29.151473070Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:48:29.151505 containerd[1452]: time="2025-01-29T12:48:29.151487963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:48:29.151561 containerd[1452]: time="2025-01-29T12:48:29.151502346Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:48:29.151561 containerd[1452]: time="2025-01-29T12:48:29.151517270Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:48:29.151561 containerd[1452]: time="2025-01-29T12:48:29.151530101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 12:48:29.152136 containerd[1452]: time="2025-01-29T12:48:29.151820954Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:48:29.152136 containerd[1452]: time="2025-01-29T12:48:29.151894184Z" level=info msg="Connect containerd service" Jan 29 12:48:29.152136 containerd[1452]: time="2025-01-29T12:48:29.151924312Z" level=info msg="using legacy CRI server" Jan 29 12:48:29.152136 containerd[1452]: time="2025-01-29T12:48:29.151931674Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:48:29.152136 containerd[1452]: time="2025-01-29T12:48:29.152038628Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:48:29.152875 containerd[1452]: time="2025-01-29T12:48:29.152693752Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:48:29.152875 
containerd[1452]: time="2025-01-29T12:48:29.152812165Z" level=info msg="Start subscribing containerd event" Jan 29 12:48:29.152875 containerd[1452]: time="2025-01-29T12:48:29.152853482Z" level=info msg="Start recovering state" Jan 29 12:48:29.152954 containerd[1452]: time="2025-01-29T12:48:29.152907040Z" level=info msg="Start event monitor" Jan 29 12:48:29.152954 containerd[1452]: time="2025-01-29T12:48:29.152918839Z" level=info msg="Start snapshots syncer" Jan 29 12:48:29.152954 containerd[1452]: time="2025-01-29T12:48:29.152927983Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:48:29.152954 containerd[1452]: time="2025-01-29T12:48:29.152936827Z" level=info msg="Start streaming server" Jan 29 12:48:29.163848 containerd[1452]: time="2025-01-29T12:48:29.156912828Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:48:29.163848 containerd[1452]: time="2025-01-29T12:48:29.156970292Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:48:29.163848 containerd[1452]: time="2025-01-29T12:48:29.157098930Z" level=info msg="containerd successfully booted in 0.103316s" Jan 29 12:48:29.157191 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:48:29.309759 tar[1444]: linux-amd64/README.md Jan 29 12:48:29.322326 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:48:29.559590 systemd-networkd[1374]: eth0: Gained IPv6LL Jan 29 12:48:29.566763 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:48:29.575831 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:48:29.597146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:48:29.607305 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:48:29.660555 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:48:29.765634 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:48:29.804175 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:48:29.818145 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:48:29.825587 systemd[1]: Started sshd@0-172.24.4.220:22-172.24.4.1:44134.service - OpenSSH per-connection server daemon (172.24.4.1:44134). Jan 29 12:48:29.830422 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:48:29.830590 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:48:29.843296 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:48:29.866469 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:48:29.876927 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:48:29.880777 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:48:29.885048 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:48:30.744585 sshd[1531]: Accepted publickey for core from 172.24.4.1 port 44134 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:48:30.753824 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:48:30.777640 systemd-logind[1437]: New session 1 of user core. Jan 29 12:48:30.781846 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:48:30.797607 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 29 12:48:30.830803 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:48:30.853152 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:48:30.867337 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:48:30.986236 systemd[1541]: Queued start job for default target default.target. Jan 29 12:48:30.989259 systemd[1541]: Created slice app.slice - User Application Slice. Jan 29 12:48:30.989281 systemd[1541]: Reached target paths.target - Paths. Jan 29 12:48:30.989295 systemd[1541]: Reached target timers.target - Timers. Jan 29 12:48:30.991235 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:48:31.016297 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:48:31.016432 systemd[1541]: Reached target sockets.target - Sockets. Jan 29 12:48:31.016449 systemd[1541]: Reached target basic.target - Basic System. Jan 29 12:48:31.016492 systemd[1541]: Reached target default.target - Main User Target. Jan 29 12:48:31.016520 systemd[1541]: Startup finished in 142ms. Jan 29 12:48:31.017013 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:48:31.029248 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:48:31.542479 systemd[1]: Started sshd@1-172.24.4.220:22-172.24.4.1:44144.service - OpenSSH per-connection server daemon (172.24.4.1:44144). Jan 29 12:48:32.003807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:48:32.006237 (kubelet)[1560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:48:33.434179 kubelet[1560]: E0129 12:48:33.434105 1560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:48:33.438862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:48:33.439213 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:48:33.439824 systemd[1]: kubelet.service: Consumed 2.169s CPU time. Jan 29 12:48:33.514125 sshd[1553]: Accepted publickey for core from 172.24.4.1 port 44144 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:48:33.517072 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:48:33.527154 systemd-logind[1437]: New session 2 of user core. Jan 29 12:48:33.540823 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:48:34.361655 sshd[1553]: pam_unix(sshd:session): session closed for user core Jan 29 12:48:34.372931 systemd[1]: sshd@1-172.24.4.220:22-172.24.4.1:44144.service: Deactivated successfully. Jan 29 12:48:34.376678 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:48:34.378715 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:48:34.387138 systemd[1]: Started sshd@2-172.24.4.220:22-172.24.4.1:42832.service - OpenSSH per-connection server daemon (172.24.4.1:42832). Jan 29 12:48:34.396139 systemd-logind[1437]: Removed session 2. 
Jan 29 12:48:34.964306 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:48:34.974024 login[1538]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:48:34.976520 systemd-logind[1437]: New session 3 of user core. Jan 29 12:48:34.984234 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:48:34.991066 systemd-logind[1437]: New session 4 of user core. Jan 29 12:48:34.997972 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:48:35.202969 coreos-metadata[1426]: Jan 29 12:48:35.202 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:48:35.250477 coreos-metadata[1426]: Jan 29 12:48:35.250 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 12:48:35.435588 coreos-metadata[1426]: Jan 29 12:48:35.435 INFO Fetch successful Jan 29 12:48:35.435588 coreos-metadata[1426]: Jan 29 12:48:35.435 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:48:35.450119 coreos-metadata[1426]: Jan 29 12:48:35.450 INFO Fetch successful Jan 29 12:48:35.450119 coreos-metadata[1426]: Jan 29 12:48:35.450 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 12:48:35.466061 coreos-metadata[1426]: Jan 29 12:48:35.466 INFO Fetch successful Jan 29 12:48:35.466061 coreos-metadata[1426]: Jan 29 12:48:35.466 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 12:48:35.480552 coreos-metadata[1426]: Jan 29 12:48:35.480 INFO Fetch successful Jan 29 12:48:35.480552 coreos-metadata[1426]: Jan 29 12:48:35.480 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 12:48:35.494047 coreos-metadata[1426]: Jan 29 12:48:35.493 INFO Fetch successful Jan 29 12:48:35.494047 coreos-metadata[1426]: Jan 29 12:48:35.494 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 12:48:35.503385 coreos-metadata[1426]: Jan 29 12:48:35.503 INFO Fetch successful Jan 29 12:48:35.549857 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:48:35.551812 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:48:35.829592 sshd[1572]: Accepted publickey for core from 172.24.4.1 port 42832 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:48:35.832233 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:48:35.843503 systemd-logind[1437]: New session 5 of user core. Jan 29 12:48:35.852960 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 29 12:48:35.898542 coreos-metadata[1489]: Jan 29 12:48:35.898 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:48:35.940992 coreos-metadata[1489]: Jan 29 12:48:35.940 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 12:48:35.958770 coreos-metadata[1489]: Jan 29 12:48:35.958 INFO Fetch successful Jan 29 12:48:35.958770 coreos-metadata[1489]: Jan 29 12:48:35.958 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:48:35.974877 coreos-metadata[1489]: Jan 29 12:48:35.974 INFO Fetch successful Jan 29 12:48:35.982630 unknown[1489]: wrote ssh authorized keys file for user: core Jan 29 12:48:36.022013 update-ssh-keys[1610]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:48:36.023580 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:48:36.026625 systemd[1]: Finished sshkeys.service. Jan 29 12:48:36.032779 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:48:36.033535 systemd[1]: Startup finished in 1.114s (kernel) + 14.448s (initrd) + 10.919s (userspace) = 26.482s. Jan 29 12:48:36.473561 sshd[1572]: pam_unix(sshd:session): session closed for user core Jan 29 12:48:36.480206 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:48:36.482125 systemd[1]: sshd@2-172.24.4.220:22-172.24.4.1:42832.service: Deactivated successfully. Jan 29 12:48:36.486010 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:48:36.488857 systemd-logind[1437]: Removed session 5. Jan 29 12:48:43.690070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:48:43.699317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:48:44.058084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:48:44.062102 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:48:44.219811 kubelet[1625]: E0129 12:48:44.219685 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:48:44.227366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:48:44.227818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:48:46.497902 systemd[1]: Started sshd@3-172.24.4.220:22-172.24.4.1:55488.service - OpenSSH per-connection server daemon (172.24.4.1:55488). Jan 29 12:48:47.975887 sshd[1633]: Accepted publickey for core from 172.24.4.1 port 55488 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:48:47.978634 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:48:47.988105 systemd-logind[1437]: New session 6 of user core. Jan 29 12:48:47.999682 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:48:48.601143 sshd[1633]: pam_unix(sshd:session): session closed for user core Jan 29 12:48:48.611619 systemd[1]: sshd@3-172.24.4.220:22-172.24.4.1:55488.service: Deactivated successfully. Jan 29 12:48:48.614897 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 29 12:48:48.618721 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:48:48.626966 systemd[1]: Started sshd@4-172.24.4.220:22-172.24.4.1:55500.service - OpenSSH per-connection server daemon (172.24.4.1:55500). Jan 29 12:48:48.630045 systemd-logind[1437]: Removed session 6. Jan 29 12:48:50.141742 sshd[1640]: Accepted publickey for core from 172.24.4.1 port 55500 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:48:50.144500 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:48:50.154518 systemd-logind[1437]: New session 7 of user core. Jan 29 12:48:50.168758 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:48:50.757383 sshd[1640]: pam_unix(sshd:session): session closed for user core Jan 29 12:48:50.769679 systemd[1]: sshd@4-172.24.4.220:22-172.24.4.1:55500.service: Deactivated successfully. Jan 29 12:48:50.772901 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:48:50.776698 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:48:50.783956 systemd[1]: Started sshd@5-172.24.4.220:22-172.24.4.1:55508.service - OpenSSH per-connection server daemon (172.24.4.1:55508). Jan 29 12:48:50.786809 systemd-logind[1437]: Removed session 7. Jan 29 12:48:52.251074 sshd[1647]: Accepted publickey for core from 172.24.4.1 port 55508 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:48:52.253800 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:48:52.264888 systemd-logind[1437]: New session 8 of user core. Jan 29 12:48:52.273681 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:48:52.991440 sshd[1647]: pam_unix(sshd:session): session closed for user core Jan 29 12:48:53.000950 systemd[1]: sshd@5-172.24.4.220:22-172.24.4.1:55508.service: Deactivated successfully. Jan 29 12:48:53.004247 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:48:53.006113 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:48:53.014972 systemd[1]: Started sshd@6-172.24.4.220:22-172.24.4.1:55514.service - OpenSSH per-connection server daemon (172.24.4.1:55514). Jan 29 12:48:53.017578 systemd-logind[1437]: Removed session 8. Jan 29 12:48:54.363596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:48:54.371785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:48:54.491683 sshd[1654]: Accepted publickey for core from 172.24.4.1 port 55514 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:48:54.494574 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:48:54.505390 systemd-logind[1437]: New session 9 of user core. Jan 29 12:48:54.515689 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:48:54.694724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:48:54.698777 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:48:54.834482 kubelet[1665]: E0129 12:48:54.834267 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:48:54.838543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:48:54.838864 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:48:55.128743 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:48:55.129457 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:48:55.154465 sudo[1672]: pam_unix(sudo:session): session closed for user root Jan 29 12:48:55.399851 sshd[1654]: pam_unix(sshd:session): session closed for user core Jan 29 12:48:55.410986 systemd[1]: sshd@6-172.24.4.220:22-172.24.4.1:55514.service: Deactivated successfully. Jan 29 12:48:55.414112 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:48:55.417822 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:48:55.424022 systemd[1]: Started sshd@7-172.24.4.220:22-172.24.4.1:38862.service - OpenSSH per-connection server daemon (172.24.4.1:38862). Jan 29 12:48:55.427187 systemd-logind[1437]: Removed session 9. Jan 29 12:48:56.606866 sshd[1677]: Accepted publickey for core from 172.24.4.1 port 38862 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:48:56.609575 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:48:56.618739 systemd-logind[1437]: New session 10 of user core. Jan 29 12:48:56.631698 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:48:57.020240 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:48:57.020950 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:48:57.027946 sudo[1681]: pam_unix(sudo:session): session closed for user root Jan 29 12:48:57.039052 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:48:57.040369 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:48:57.064943 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:48:57.080660 auditctl[1684]: No rules Jan 29 12:48:57.083024 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:48:57.083522 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:48:57.092076 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:48:57.153468 augenrules[1702]: No rules Jan 29 12:48:57.154649 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:48:57.157248 sudo[1680]: pam_unix(sudo:session): session closed for user root Jan 29 12:48:57.363782 sshd[1677]: pam_unix(sshd:session): session closed for user core Jan 29 12:48:57.372512 systemd[1]: sshd@7-172.24.4.220:22-172.24.4.1:38862.service: Deactivated successfully. 
Jan 29 12:48:57.375373 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:48:57.377366 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:48:57.386145 systemd[1]: Started sshd@8-172.24.4.220:22-172.24.4.1:38876.service - OpenSSH per-connection server daemon (172.24.4.1:38876). Jan 29 12:48:57.389854 systemd-logind[1437]: Removed session 10. Jan 29 12:48:58.538243 sshd[1710]: Accepted publickey for core from 172.24.4.1 port 38876 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:48:58.540863 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:48:58.550435 systemd-logind[1437]: New session 11 of user core. Jan 29 12:48:58.559660 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:48:59.019014 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:48:59.019712 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:49:00.107884 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:49:00.125032 (dockerd)[1730]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:49:00.875787 dockerd[1730]: time="2025-01-29T12:49:00.875689976Z" level=info msg="Starting up" Jan 29 12:49:01.110779 dockerd[1730]: time="2025-01-29T12:49:01.110637602Z" level=info msg="Loading containers: start." Jan 29 12:49:01.255469 kernel: Initializing XFRM netlink socket Jan 29 12:49:01.343698 systemd-networkd[1374]: docker0: Link UP Jan 29 12:49:01.366536 dockerd[1730]: time="2025-01-29T12:49:01.366431799Z" level=info msg="Loading containers: done." Jan 29 12:49:01.400597 dockerd[1730]: time="2025-01-29T12:49:01.400353622Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:49:01.401469 dockerd[1730]: time="2025-01-29T12:49:01.400934262Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:49:01.401469 dockerd[1730]: time="2025-01-29T12:49:01.401148056Z" level=info msg="Daemon has completed initialization" Jan 29 12:49:01.463230 dockerd[1730]: time="2025-01-29T12:49:01.462206552Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:49:01.462590 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:49:02.996520 containerd[1452]: time="2025-01-29T12:49:02.996139707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 12:49:03.741262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2448127458.mount: Deactivated successfully. Jan 29 12:49:04.864059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:49:04.873008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:49:05.320650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:49:05.334807 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:49:05.416668 kubelet[1918]: E0129 12:49:05.416543 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:49:05.418572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:49:05.418700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:49:06.146744 containerd[1452]: time="2025-01-29T12:49:06.146674151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:06.148125 containerd[1452]: time="2025-01-29T12:49:06.147948651Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674832" Jan 29 12:49:06.149372 containerd[1452]: time="2025-01-29T12:49:06.149312338Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:06.152483 containerd[1452]: time="2025-01-29T12:49:06.152438910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:06.154128 containerd[1452]: time="2025-01-29T12:49:06.153787558Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 3.157575127s" Jan 29 12:49:06.154128 containerd[1452]: time="2025-01-29T12:49:06.153821902Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 29 12:49:06.154536 containerd[1452]: time="2025-01-29T12:49:06.154518913Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 12:49:08.113352 containerd[1452]: time="2025-01-29T12:49:08.112236561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:08.134302 containerd[1452]: time="2025-01-29T12:49:08.134160808Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770719" Jan 29 12:49:08.154714 containerd[1452]: time="2025-01-29T12:49:08.154597363Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:08.188989 containerd[1452]: time="2025-01-29T12:49:08.188829585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 
12:49:08.191568 containerd[1452]: time="2025-01-29T12:49:08.191238704Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 2.036621849s" Jan 29 12:49:08.191568 containerd[1452]: time="2025-01-29T12:49:08.191312550Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 29 12:49:08.192832 containerd[1452]: time="2025-01-29T12:49:08.192489354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 12:49:10.401100 containerd[1452]: time="2025-01-29T12:49:10.401023665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:10.404726 containerd[1452]: time="2025-01-29T12:49:10.404597565Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169767" Jan 29 12:49:10.407096 containerd[1452]: time="2025-01-29T12:49:10.406952169Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:10.414842 containerd[1452]: time="2025-01-29T12:49:10.414747941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:10.418542 containerd[1452]: time="2025-01-29T12:49:10.417883165Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 2.22532788s" Jan 29 12:49:10.418542 containerd[1452]: time="2025-01-29T12:49:10.417965519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 29 12:49:10.419148 containerd[1452]: time="2025-01-29T12:49:10.418983741Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 12:49:11.817602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount805226267.mount: Deactivated successfully. 
Jan 29 12:49:12.379311 containerd[1452]: time="2025-01-29T12:49:12.379265798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:12.380670 containerd[1452]: time="2025-01-29T12:49:12.380636617Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909474" Jan 29 12:49:12.381840 containerd[1452]: time="2025-01-29T12:49:12.381797797Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:12.388015 containerd[1452]: time="2025-01-29T12:49:12.387981848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:12.388836 containerd[1452]: time="2025-01-29T12:49:12.388756589Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.969697279s" Jan 29 12:49:12.388836 containerd[1452]: time="2025-01-29T12:49:12.388801583Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 29 12:49:12.389886 containerd[1452]: time="2025-01-29T12:49:12.389785753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 12:49:13.041750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648403694.mount: Deactivated successfully. Jan 29 12:49:13.558280 update_engine[1439]: I20250129 12:49:13.558233 1439 update_attempter.cc:509] Updating boot flags... 
Jan 29 12:49:13.584449 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2008) Jan 29 12:49:14.030289 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2012) Jan 29 12:49:14.702747 containerd[1452]: time="2025-01-29T12:49:14.702703175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:14.704356 containerd[1452]: time="2025-01-29T12:49:14.704324994Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 29 12:49:14.705301 containerd[1452]: time="2025-01-29T12:49:14.705274001Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:14.712414 containerd[1452]: time="2025-01-29T12:49:14.712323465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:14.714190 containerd[1452]: time="2025-01-29T12:49:14.714146770Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.324302318s" Jan 29 12:49:14.714240 containerd[1452]: time="2025-01-29T12:49:14.714193227Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 29 12:49:14.715510 containerd[1452]: time="2025-01-29T12:49:14.715491293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 12:49:15.297950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467297047.mount: Deactivated successfully. 
Jan 29 12:49:15.309462 containerd[1452]: time="2025-01-29T12:49:15.309282834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:15.311337 containerd[1452]: time="2025-01-29T12:49:15.311146115Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 29 12:49:15.312883 containerd[1452]: time="2025-01-29T12:49:15.312758769Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:15.318551 containerd[1452]: time="2025-01-29T12:49:15.318448567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:15.320695 containerd[1452]: time="2025-01-29T12:49:15.320609371Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 605.020067ms" Jan 29 12:49:15.320695 containerd[1452]: time="2025-01-29T12:49:15.320678710Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 12:49:15.321755 containerd[1452]: time="2025-01-29T12:49:15.321703640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 12:49:15.613483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 12:49:15.620838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:49:15.787743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:49:15.792899 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:49:15.874209 kubelet[2031]: E0129 12:49:15.873838 2031 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:49:15.877936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:49:15.878267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:49:16.908190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547359260.mount: Deactivated successfully. 
Jan 29 12:49:19.752044 containerd[1452]: time="2025-01-29T12:49:19.751958373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:19.754143 containerd[1452]: time="2025-01-29T12:49:19.754100770Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551328" Jan 29 12:49:19.755770 containerd[1452]: time="2025-01-29T12:49:19.755721833Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:19.761428 containerd[1452]: time="2025-01-29T12:49:19.761200354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:19.764655 containerd[1452]: time="2025-01-29T12:49:19.764599565Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.442838668s" Jan 29 12:49:19.764655 containerd[1452]: time="2025-01-29T12:49:19.764644449Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 29 12:49:23.840329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:49:23.860020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:49:23.917007 systemd[1]: Reloading requested from client PID 2119 ('systemctl') (unit session-11.scope)... Jan 29 12:49:23.917047 systemd[1]: Reloading... Jan 29 12:49:24.029446 zram_generator::config[2158]: No configuration found. Jan 29 12:49:24.169136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:49:24.250913 systemd[1]: Reloading finished in 333 ms. Jan 29 12:49:24.304561 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:49:24.304635 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:49:24.305062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:49:24.306956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:49:24.438716 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:49:24.438731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:49:24.730859 kubelet[2223]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:49:24.730859 kubelet[2223]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 29 12:49:24.730859 kubelet[2223]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:49:24.730859 kubelet[2223]: I0129 12:49:24.729645 2223 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:49:25.049161 kubelet[2223]: I0129 12:49:25.049042 2223 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 12:49:25.049547 kubelet[2223]: I0129 12:49:25.049290 2223 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:49:25.050271 kubelet[2223]: I0129 12:49:25.049853 2223 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 12:49:25.996793 kubelet[2223]: E0129 12:49:25.996694 2223 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:49:25.999001 kubelet[2223]: I0129 12:49:25.998767 2223 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:49:26.020655 kubelet[2223]: E0129 12:49:26.020566 2223 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:49:26.020655 kubelet[2223]: I0129 12:49:26.020628 2223 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:49:26.026954 kubelet[2223]: I0129 12:49:26.026869 2223 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:49:26.027376 kubelet[2223]: I0129 12:49:26.027285 2223 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:49:26.027815 kubelet[2223]: I0129 12:49:26.027351 2223 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-6-7edc95d587.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:49:26.028091 kubelet[2223]: I0129 12:49:26.027802 2223 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:49:26.028091 kubelet[2223]: I0129 12:49:26.027865 2223 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 12:49:26.028091 kubelet[2223]: I0129 12:49:26.028082 2223 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:49:26.038025 kubelet[2223]: I0129 12:49:26.037961 2223 kubelet.go:446] "Attempting to sync node with API server" Jan 29 12:49:26.038025 kubelet[2223]: I0129 12:49:26.038012 2223 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:49:26.040387 kubelet[2223]: I0129 12:49:26.038050 2223 kubelet.go:352] "Adding apiserver pod source" Jan 29 12:49:26.040387 kubelet[2223]: I0129 12:49:26.038071 2223 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:49:26.056450 kubelet[2223]: W0129 12:49:26.055561 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6-7edc95d587.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Jan 29 12:49:26.056450 kubelet[2223]: E0129 12:49:26.055729 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6-7edc95d587.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" 
Jan 29 12:49:26.056677 kubelet[2223]: I0129 12:49:26.056546 2223 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:49:26.057990 kubelet[2223]: I0129 12:49:26.057942 2223 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:49:26.058104 kubelet[2223]: W0129 12:49:26.058059 2223 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:49:26.061983 kubelet[2223]: W0129 12:49:26.060631 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Jan 29 12:49:26.061983 kubelet[2223]: E0129 12:49:26.060758 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:49:26.067285 kubelet[2223]: I0129 12:49:26.066053 2223 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 12:49:26.067285 kubelet[2223]: I0129 12:49:26.066159 2223 server.go:1287] "Started kubelet" Jan 29 12:49:26.073854 kubelet[2223]: I0129 12:49:26.073818 2223 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:49:26.083357 kubelet[2223]: I0129 12:49:26.083304 2223 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:49:26.084625 kubelet[2223]: E0129 12:49:26.081526 2223 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.220:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-6-7edc95d587.novalocal.181f2abb84f35827 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-6-7edc95d587.novalocal,UID:ci-4081-3-0-6-7edc95d587.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-6-7edc95d587.novalocal,},FirstTimestamp:2025-01-29 12:49:26.066092071 +0000 UTC m=+1.622587451,LastTimestamp:2025-01-29 12:49:26.066092071 +0000 UTC m=+1.622587451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-6-7edc95d587.novalocal,}" Jan 29 12:49:26.087243 kubelet[2223]: I0129 12:49:26.087173 2223 server.go:490] "Adding debug handlers to kubelet server" Jan 29 12:49:26.090575 kubelet[2223]: I0129 12:49:26.090334 2223 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:49:26.092336 kubelet[2223]: I0129 12:49:26.092299 2223 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:49:26.092908 kubelet[2223]: I0129 12:49:26.092840 2223 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:49:26.095686 kubelet[2223]: E0129 12:49:26.095603 2223 kubelet_node_status.go:467] "Error getting 
the current node from lister" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" Jan 29 12:49:26.096027 kubelet[2223]: I0129 12:49:26.095700 2223 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 12:49:26.096326 kubelet[2223]: I0129 12:49:26.096060 2223 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:49:26.096326 kubelet[2223]: I0129 12:49:26.096144 2223 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:49:26.099118 kubelet[2223]: W0129 12:49:26.098971 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Jan 29 12:49:26.100039 kubelet[2223]: E0129 12:49:26.099875 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:49:26.101008 kubelet[2223]: I0129 12:49:26.100868 2223 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:49:26.101602 kubelet[2223]: E0129 12:49:26.101476 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6-7edc95d587.novalocal?timeout=10s\": dial tcp 172.24.4.220:6443: connect: connection refused" interval="200ms" Jan 29 12:49:26.102033 kubelet[2223]: E0129 12:49:26.101897 2223 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:49:26.104747 kubelet[2223]: I0129 12:49:26.104690 2223 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:49:26.104747 kubelet[2223]: I0129 12:49:26.104730 2223 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:49:26.117546 kubelet[2223]: I0129 12:49:26.116600 2223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:49:26.118976 kubelet[2223]: I0129 12:49:26.118939 2223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:49:26.118976 kubelet[2223]: I0129 12:49:26.118961 2223 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 12:49:26.119049 kubelet[2223]: I0129 12:49:26.118981 2223 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
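
The factory registrations above show cadvisor probing for container runtimes: the crio factory fails because /var/run/crio/crio.sock does not exist, while the containerd and systemd factories register successfully. A small sketch of that kind of unix-socket probe, using only the standard library:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocket reports whether anything is listening on the given unix
// socket, mirroring why the crio factory above fails with "no such file or
// directory" while the containerd factory registers successfully.
func probeSocket(path string) error {
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	for _, sock := range []string{
		"/var/run/crio/crio.sock",
		"/run/containerd/containerd.sock",
	} {
		if err := probeSocket(sock); err != nil {
			fmt.Printf("%s: unavailable: %v\n", sock, err)
			continue
		}
		fmt.Printf("%s: reachable\n", sock)
	}
}
```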
Jan 29 12:49:26.119049 kubelet[2223]: I0129 12:49:26.119007 2223 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 12:49:26.119095 kubelet[2223]: E0129 12:49:26.119049 2223 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:49:26.120008 kubelet[2223]: W0129 12:49:26.119718 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Jan 29 12:49:26.120008 kubelet[2223]: E0129 12:49:26.119751 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:49:26.124665 kubelet[2223]: I0129 12:49:26.124648 2223 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 12:49:26.124847 kubelet[2223]: I0129 12:49:26.124836 2223 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 12:49:26.124924 kubelet[2223]: I0129 12:49:26.124915 2223 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:49:26.132462 kubelet[2223]: I0129 12:49:26.132448 2223 policy_none.go:49] "None policy: Start" Jan 29 12:49:26.132547 kubelet[2223]: I0129 12:49:26.132538 2223 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 12:49:26.132606 kubelet[2223]: I0129 12:49:26.132597 2223 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:49:26.139473 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:49:26.151975 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:49:26.155647 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:49:26.163191 kubelet[2223]: I0129 12:49:26.163169 2223 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:49:26.163607 kubelet[2223]: I0129 12:49:26.163330 2223 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:49:26.163607 kubelet[2223]: I0129 12:49:26.163359 2223 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:49:26.163607 kubelet[2223]: I0129 12:49:26.163591 2223 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:49:26.164507 kubelet[2223]: E0129 12:49:26.164484 2223 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 12:49:26.164548 kubelet[2223]: E0129 12:49:26.164539 2223 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" Jan 29 12:49:26.165100 kubelet[2223]: E0129 12:49:26.165015 2223 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.220:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-6-7edc95d587.novalocal.181f2abb84f35827 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-6-7edc95d587.novalocal,UID:ci-4081-3-0-6-7edc95d587.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-6-7edc95d587.novalocal,},FirstTimestamp:2025-01-29 12:49:26.066092071 +0000 UTC m=+1.622587451,LastTimestamp:2025-01-29 12:49:26.066092071 +0000 UTC m=+1.622587451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-6-7edc95d587.novalocal,}" Jan 29 12:49:26.243257 systemd[1]: Created slice kubepods-burstable-pod87b4d20d7ddfcf640fe506a78362ef75.slice - libcontainer container kubepods-burstable-pod87b4d20d7ddfcf640fe506a78362ef75.slice. Jan 29 12:49:26.263875 kubelet[2223]: E0129 12:49:26.263674 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.270307 kubelet[2223]: I0129 12:49:26.269740 2223 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.271377 kubelet[2223]: E0129 12:49:26.271325 2223 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.220:6443/api/v1/nodes\": dial tcp 172.24.4.220:6443: connect: connection refused" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.278088 systemd[1]: Created slice kubepods-burstable-podf2c7777e913e8333c52eb18354d186f4.slice - libcontainer container kubepods-burstable-podf2c7777e913e8333c52eb18354d186f4.slice. Jan 29 12:49:26.288350 kubelet[2223]: E0129 12:49:26.287980 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.293677 systemd[1]: Created slice kubepods-burstable-pod637580470fce096096e7992b9f60b148.slice - libcontainer container kubepods-burstable-pod637580470fce096096e7992b9f60b148.slice. 
Jan 29 12:49:26.297594 kubelet[2223]: I0129 12:49:26.297327 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2c7777e913e8333c52eb18354d186f4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"f2c7777e913e8333c52eb18354d186f4\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.297594 kubelet[2223]: I0129 12:49:26.297466 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.297594 kubelet[2223]: I0129 12:49:26.297534 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.297594 kubelet[2223]: I0129 12:49:26.297584 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87b4d20d7ddfcf640fe506a78362ef75-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"87b4d20d7ddfcf640fe506a78362ef75\") " pod="kube-system/kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.297912 kubelet[2223]: I0129 12:49:26.297627 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.297912 kubelet[2223]: I0129 12:49:26.297670 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2c7777e913e8333c52eb18354d186f4-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"f2c7777e913e8333c52eb18354d186f4\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.297912 kubelet[2223]: I0129 12:49:26.297730 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2c7777e913e8333c52eb18354d186f4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"f2c7777e913e8333c52eb18354d186f4\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.297912 kubelet[2223]: I0129 12:49:26.297783 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.297912 kubelet[2223]: I0129 12:49:26.297826 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.300084 kubelet[2223]: E0129 12:49:26.300015 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.310194 kubelet[2223]: E0129 12:49:26.310073 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6-7edc95d587.novalocal?timeout=10s\": dial tcp 172.24.4.220:6443: connect: connection refused" interval="400ms" Jan 29 12:49:26.475729 kubelet[2223]: I0129 12:49:26.475647 2223 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.476339 kubelet[2223]: E0129 12:49:26.476263 2223 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.220:6443/api/v1/nodes\": dial tcp 172.24.4.220:6443: connect: connection refused" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.566859 containerd[1452]: time="2025-01-29T12:49:26.566528160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal,Uid:87b4d20d7ddfcf640fe506a78362ef75,Namespace:kube-system,Attempt:0,}" Jan 29 12:49:26.593244 containerd[1452]: time="2025-01-29T12:49:26.592996735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal,Uid:f2c7777e913e8333c52eb18354d186f4,Namespace:kube-system,Attempt:0,}" Jan 29 12:49:26.601444 containerd[1452]: time="2025-01-29T12:49:26.601347143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal,Uid:637580470fce096096e7992b9f60b148,Namespace:kube-system,Attempt:0,}" Jan 29 12:49:26.711587 kubelet[2223]: E0129 12:49:26.711507 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6-7edc95d587.novalocal?timeout=10s\": dial tcp 172.24.4.220:6443: connect: connection refused" interval="800ms" Jan 29 12:49:26.880376 kubelet[2223]: I0129 12:49:26.880300 2223 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:26.881634 kubelet[2223]: E0129 12:49:26.881565 2223 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.220:6443/api/v1/nodes\": dial tcp 172.24.4.220:6443: connect: connection refused" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:27.163888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1162007735.mount: Deactivated successfully. 
Jan 29 12:49:27.172639 containerd[1452]: time="2025-01-29T12:49:27.172530524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:49:27.176007 containerd[1452]: time="2025-01-29T12:49:27.175903550Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:49:27.177561 containerd[1452]: time="2025-01-29T12:49:27.177474377Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:49:27.179448 containerd[1452]: time="2025-01-29T12:49:27.179339804Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:49:27.182801 containerd[1452]: time="2025-01-29T12:49:27.182584210Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 12:49:27.185076 containerd[1452]: time="2025-01-29T12:49:27.184854615Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:49:27.185453 containerd[1452]: time="2025-01-29T12:49:27.185324403Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:49:27.192461 containerd[1452]: time="2025-01-29T12:49:27.192298120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:49:27.197074 containerd[1452]: time="2025-01-29T12:49:27.196696203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 603.557352ms" Jan 29 12:49:27.201347 containerd[1452]: time="2025-01-29T12:49:27.201234939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.549394ms" Jan 29 12:49:27.203011 containerd[1452]: time="2025-01-29T12:49:27.202924928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 601.406745ms" Jan 29 12:49:27.281616 kubelet[2223]: W0129 12:49:27.281214 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Jan 29 12:49:27.281616 
kubelet[2223]: E0129 12:49:27.281581 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:49:27.380540 kubelet[2223]: W0129 12:49:27.380347 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Jan 29 12:49:27.380540 kubelet[2223]: E0129 12:49:27.380486 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:49:27.385609 kubelet[2223]: W0129 12:49:27.385480 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6-7edc95d587.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Jan 29 12:49:27.385609 kubelet[2223]: E0129 12:49:27.385558 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6-7edc95d587.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:49:27.403260 kubelet[2223]: W0129 12:49:27.403106 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.220:6443: connect: connection refused Jan 29 12:49:27.403260 kubelet[2223]: E0129 12:49:27.403211 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.220:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:49:27.471361 containerd[1452]: time="2025-01-29T12:49:27.471022535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:49:27.471361 containerd[1452]: time="2025-01-29T12:49:27.471090112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:49:27.471361 containerd[1452]: time="2025-01-29T12:49:27.471110761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:49:27.471361 containerd[1452]: time="2025-01-29T12:49:27.471192693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:49:27.480419 containerd[1452]: time="2025-01-29T12:49:27.479913127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:49:27.480419 containerd[1452]: time="2025-01-29T12:49:27.480066354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:49:27.480419 containerd[1452]: time="2025-01-29T12:49:27.480114954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:49:27.480419 containerd[1452]: time="2025-01-29T12:49:27.480310119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:49:27.499558 containerd[1452]: time="2025-01-29T12:49:27.498914832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:49:27.499558 containerd[1452]: time="2025-01-29T12:49:27.499531024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:49:27.499743 containerd[1452]: time="2025-01-29T12:49:27.499586518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:49:27.502102 containerd[1452]: time="2025-01-29T12:49:27.501849158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:49:27.507615 systemd[1]: Started cri-containerd-8afc2d68e4c4cda5cff9c6cbd1a07276b0cc789205f9f596b322bae52af6a9d1.scope - libcontainer container 8afc2d68e4c4cda5cff9c6cbd1a07276b0cc789205f9f596b322bae52af6a9d1. Jan 29 12:49:27.512032 kubelet[2223]: E0129 12:49:27.511983 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6-7edc95d587.novalocal?timeout=10s\": dial tcp 172.24.4.220:6443: connect: connection refused" interval="1.6s" Jan 29 12:49:27.515013 systemd[1]: Started cri-containerd-41fbece88c205842b4c7f9c718ba0d78ea7310a12d91bf1534a767c70a89a703.scope - libcontainer container 41fbece88c205842b4c7f9c718ba0d78ea7310a12d91bf1534a767c70a89a703. Jan 29 12:49:27.542532 systemd[1]: Started cri-containerd-a38076790345ee9b95bd0f2eb12b404c532bf946ad754fa79b7fc408421c5b16.scope - libcontainer container a38076790345ee9b95bd0f2eb12b404c532bf946ad754fa79b7fc408421c5b16. 
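
Each of the three sandboxes above pulls the same pause image, registry.k8s.io/pause:3.8 at 311286 bytes, in roughly 600 to 635ms. A quick sketch of turning those log fields into a throughput figure; for an image this small the time is dominated by registry round-trips, so the result is a floor, not a bandwidth measurement:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the "Pulled image" containerd entries above.
	const sizeBytes = 311286
	elapsed, err := time.ParseDuration("603.557352ms")
	if err != nil {
		panic(err)
	}
	rate := float64(sizeBytes) / elapsed.Seconds()
	fmt.Printf("%.0f B/s (%.1f KiB/s)\n", rate, rate/1024)
}
```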
Jan 29 12:49:27.573009 containerd[1452]: time="2025-01-29T12:49:27.572969141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal,Uid:f2c7777e913e8333c52eb18354d186f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8afc2d68e4c4cda5cff9c6cbd1a07276b0cc789205f9f596b322bae52af6a9d1\"" Jan 29 12:49:27.576373 containerd[1452]: time="2025-01-29T12:49:27.576342147Z" level=info msg="CreateContainer within sandbox \"8afc2d68e4c4cda5cff9c6cbd1a07276b0cc789205f9f596b322bae52af6a9d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:49:27.604707 containerd[1452]: time="2025-01-29T12:49:27.604581714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal,Uid:637580470fce096096e7992b9f60b148,Namespace:kube-system,Attempt:0,} returns sandbox id \"41fbece88c205842b4c7f9c718ba0d78ea7310a12d91bf1534a767c70a89a703\"" Jan 29 12:49:27.605615 containerd[1452]: time="2025-01-29T12:49:27.605290769Z" level=info msg="CreateContainer within sandbox \"8afc2d68e4c4cda5cff9c6cbd1a07276b0cc789205f9f596b322bae52af6a9d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"485ed3ad2111b7a7bf722d55aa4d48014b9c8e42890ef3b7c305606bcca117f9\"" Jan 29 12:49:27.606581 containerd[1452]: time="2025-01-29T12:49:27.606559872Z" level=info msg="StartContainer for \"485ed3ad2111b7a7bf722d55aa4d48014b9c8e42890ef3b7c305606bcca117f9\"" Jan 29 12:49:27.611011 containerd[1452]: time="2025-01-29T12:49:27.610924041Z" level=info msg="CreateContainer within sandbox \"41fbece88c205842b4c7f9c718ba0d78ea7310a12d91bf1534a767c70a89a703\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:49:27.612153 containerd[1452]: time="2025-01-29T12:49:27.612129375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal,Uid:87b4d20d7ddfcf640fe506a78362ef75,Namespace:kube-system,Attempt:0,} returns sandbox id \"a38076790345ee9b95bd0f2eb12b404c532bf946ad754fa79b7fc408421c5b16\"" Jan 29 12:49:27.615303 containerd[1452]: time="2025-01-29T12:49:27.615269745Z" level=info msg="CreateContainer within sandbox \"a38076790345ee9b95bd0f2eb12b404c532bf946ad754fa79b7fc408421c5b16\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:49:27.639563 systemd[1]: Started cri-containerd-485ed3ad2111b7a7bf722d55aa4d48014b9c8e42890ef3b7c305606bcca117f9.scope - libcontainer container 485ed3ad2111b7a7bf722d55aa4d48014b9c8e42890ef3b7c305606bcca117f9. 
Jan 29 12:49:27.641835 containerd[1452]: time="2025-01-29T12:49:27.641667377Z" level=info msg="CreateContainer within sandbox \"41fbece88c205842b4c7f9c718ba0d78ea7310a12d91bf1534a767c70a89a703\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b55b253076439e259b3e8290461cb534b732d2d2ca769a097f03b172b86bec3\"" Jan 29 12:49:27.642615 containerd[1452]: time="2025-01-29T12:49:27.642584533Z" level=info msg="StartContainer for \"5b55b253076439e259b3e8290461cb534b732d2d2ca769a097f03b172b86bec3\"" Jan 29 12:49:27.649718 containerd[1452]: time="2025-01-29T12:49:27.649293455Z" level=info msg="CreateContainer within sandbox \"a38076790345ee9b95bd0f2eb12b404c532bf946ad754fa79b7fc408421c5b16\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"75c1b94d7f1dac91a1380bf8004cb3d683dd7c9ac606e0b37024f7ca1b8ab54f\"" Jan 29 12:49:27.651795 containerd[1452]: time="2025-01-29T12:49:27.651686620Z" level=info msg="StartContainer for \"75c1b94d7f1dac91a1380bf8004cb3d683dd7c9ac606e0b37024f7ca1b8ab54f\"" Jan 29 12:49:27.673578 systemd[1]: Started cri-containerd-5b55b253076439e259b3e8290461cb534b732d2d2ca769a097f03b172b86bec3.scope - libcontainer container 5b55b253076439e259b3e8290461cb534b732d2d2ca769a097f03b172b86bec3. Jan 29 12:49:27.685447 kubelet[2223]: I0129 12:49:27.684379 2223 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:27.685447 kubelet[2223]: E0129 12:49:27.684769 2223 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.220:6443/api/v1/nodes\": dial tcp 172.24.4.220:6443: connect: connection refused" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:27.696689 systemd[1]: Started cri-containerd-75c1b94d7f1dac91a1380bf8004cb3d683dd7c9ac606e0b37024f7ca1b8ab54f.scope - libcontainer container 75c1b94d7f1dac91a1380bf8004cb3d683dd7c9ac606e0b37024f7ca1b8ab54f. 
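
Each static pod in the entries above (and the StartContainer results just below) follows the same CRI sequence: RunPodSandbox returns a sandbox ID, CreateContainer places a container inside that sandbox and returns a container ID, and StartContainer runs it. A sketch of that ordering with a deliberately simplified interface; the real CRI runtime service exchanges structured PodSandboxConfig/ContainerConfig messages over gRPC, not plain strings:

```go
package main

import "fmt"

// runtimeService is a simplified stand-in for the CRI runtime service.
type runtimeService interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startStaticPod mirrors the ordering in the containerd entries above.
func startStaticPod(rs runtimeService, pod, name string) error {
	sid, err := rs.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	cid, err := rs.CreateContainer(sid, name)
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	return rs.StartContainer(cid)
}

// fake is a toy implementation so the sketch runs end to end.
type fake struct{ seq int }

func (f *fake) RunPodSandbox(pod string) (string, error) {
	f.seq++
	return fmt.Sprintf("sandbox-%d-%s", f.seq, pod), nil
}
func (f *fake) CreateContainer(sid, name string) (string, error) {
	return sid + "/" + name, nil
}
func (f *fake) StartContainer(cid string) error {
	fmt.Printf("StartContainer for %q returns successfully\n", cid)
	return nil
}

func main() {
	rt := &fake{}
	for _, p := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
		if err := startStaticPod(rt, p, p); err != nil {
			fmt.Println(err)
		}
	}
}
```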
Jan 29 12:49:27.720539 containerd[1452]: time="2025-01-29T12:49:27.720490243Z" level=info msg="StartContainer for \"485ed3ad2111b7a7bf722d55aa4d48014b9c8e42890ef3b7c305606bcca117f9\" returns successfully" Jan 29 12:49:27.750482 containerd[1452]: time="2025-01-29T12:49:27.750198273Z" level=info msg="StartContainer for \"5b55b253076439e259b3e8290461cb534b732d2d2ca769a097f03b172b86bec3\" returns successfully" Jan 29 12:49:27.781688 containerd[1452]: time="2025-01-29T12:49:27.781640376Z" level=info msg="StartContainer for \"75c1b94d7f1dac91a1380bf8004cb3d683dd7c9ac606e0b37024f7ca1b8ab54f\" returns successfully" Jan 29 12:49:28.133410 kubelet[2223]: E0129 12:49:28.131796 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:28.134050 kubelet[2223]: E0129 12:49:28.133947 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:28.143735 kubelet[2223]: E0129 12:49:28.143708 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:29.145878 kubelet[2223]: E0129 12:49:29.145846 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:29.146213 kubelet[2223]: E0129 12:49:29.146180 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:29.147929 kubelet[2223]: E0129 12:49:29.147906 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:29.287748 kubelet[2223]: I0129 12:49:29.287709 2223 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:29.947119 kubelet[2223]: E0129 12:49:29.947078 2223 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:30.044872 kubelet[2223]: I0129 12:49:30.044817 2223 apiserver.go:52] "Watching apiserver" Jan 29 12:49:30.097706 kubelet[2223]: I0129 12:49:30.097630 2223 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:49:30.112063 kubelet[2223]: I0129 12:49:30.111999 2223 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:30.144602 kubelet[2223]: I0129 12:49:30.144556 2223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:30.152563 kubelet[2223]: E0129 12:49:30.152497 2223 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:30.200606 kubelet[2223]: I0129 12:49:30.200487 2223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:30.205478 kubelet[2223]: E0129 12:49:30.205429 2223 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:30.205478 kubelet[2223]: I0129 12:49:30.205466 2223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:30.209051 kubelet[2223]: E0129 12:49:30.209001 2223 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:30.209051 kubelet[2223]: I0129 12:49:30.209040 2223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:30.215955 kubelet[2223]: E0129 12:49:30.215910 2223 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:31.598676 kubelet[2223]: I0129 12:49:31.597926 2223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:31.617871 kubelet[2223]: W0129 12:49:31.616724 2223 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:49:32.749297 systemd[1]: Reloading requested from client PID 2500 ('systemctl') (unit session-11.scope)... Jan 29 12:49:32.749336 systemd[1]: Reloading... Jan 29 12:49:32.895427 zram_generator::config[2539]: No configuration found. Jan 29 12:49:33.102281 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:49:33.208740 systemd[1]: Reloading finished in 458 ms. Jan 29 12:49:33.259117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:49:33.275537 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:49:33.275773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:49:33.275817 systemd[1]: kubelet.service: Consumed 1.060s CPU time, 125.4M memory peak, 0B memory swap peak. Jan 29 12:49:33.280751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:49:33.549550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:49:33.569958 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:49:33.633790 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:49:33.633790 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 12:49:33.633790 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:49:33.634442 kubelet[2603]: I0129 12:49:33.633824 2603 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:49:33.645308 kubelet[2603]: I0129 12:49:33.645242 2603 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 12:49:33.645308 kubelet[2603]: I0129 12:49:33.645274 2603 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:49:33.645615 kubelet[2603]: I0129 12:49:33.645562 2603 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 12:49:33.646952 kubelet[2603]: I0129 12:49:33.646909 2603 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:49:33.659597 kubelet[2603]: I0129 12:49:33.658722 2603 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:49:33.666264 kubelet[2603]: E0129 12:49:33.666053 2603 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:49:33.666264 kubelet[2603]: I0129 12:49:33.666157 2603 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:49:33.670798 kubelet[2603]: I0129 12:49:33.670739 2603 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:49:33.670942 kubelet[2603]: I0129 12:49:33.670902 2603 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:49:33.671159 kubelet[2603]: I0129 12:49:33.670926 2603 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-6-7edc95d587.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:49:33.671159 kubelet[2603]: I0129 12:49:33.671105 2603 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:49:33.671159 kubelet[2603]: I0129 12:49:33.671116 2603 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 12:49:33.671159 kubelet[2603]: I0129 12:49:33.671146 2603 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:49:33.672191 kubelet[2603]: I0129 12:49:33.671256 2603 kubelet.go:446] "Attempting to sync node with API server" Jan 29 12:49:33.672191 kubelet[2603]: I0129 12:49:33.671267 2603 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:49:33.672191 kubelet[2603]: I0129 12:49:33.671283 2603 kubelet.go:352] "Adding apiserver pod source" Jan 29 12:49:33.672191 kubelet[2603]: I0129 12:49:33.671292 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:49:33.689972 kubelet[2603]: I0129 12:49:33.687663 2603 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:49:33.689972 kubelet[2603]: I0129 12:49:33.688337 2603 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:49:33.689972 kubelet[2603]: I0129 12:49:33.689070 2603 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 12:49:33.689972 kubelet[2603]: I0129 12:49:33.689103 2603 server.go:1287] "Started kubelet" Jan 29 12:49:33.694318 kubelet[2603]: I0129 12:49:33.694296 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:49:33.696537 kubelet[2603]: I0129 12:49:33.696507 2603 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:49:33.698247 kubelet[2603]: I0129 12:49:33.697936 2603 server.go:490] "Adding debug handlers to kubelet server" Jan 29 12:49:33.700102 kubelet[2603]: I0129 12:49:33.700048 2603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:49:33.700262 kubelet[2603]: I0129 12:49:33.700240 2603 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:49:33.704125 kubelet[2603]: I0129 12:49:33.704096 2603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:49:33.706617 kubelet[2603]: I0129 12:49:33.706593 2603 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 12:49:33.707049 kubelet[2603]: E0129 12:49:33.707017 2603 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-6-7edc95d587.novalocal\" not found" Jan 29 12:49:33.709374 kubelet[2603]: I0129 12:49:33.709315 2603 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:49:33.709789 kubelet[2603]: I0129 12:49:33.709742 2603 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:49:33.712334 kubelet[2603]: I0129 12:49:33.712296 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:49:33.714257 kubelet[2603]: I0129 12:49:33.714239 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:49:33.714344 kubelet[2603]: I0129 12:49:33.714335 2603 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 12:49:33.714467 kubelet[2603]: I0129 12:49:33.714457 2603 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
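
A few entries back the restarted kubelet reports "Client rotation is on, will bootstrap in background" and loads its cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem; here it also starts the serving-cert controller for kubelet.crt/kubelet.key. A sketch of inspecting such a combined PEM pair, assuming (as the single-file path suggests) that certificate and key live in the same file, so the same path serves as both arguments; this is not the kubelet's rotation code itself:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"time"
)

func main() {
	// Path taken from the "Loading cert/key pair" log entry above. The
	// file holds both the client certificate and its private key, so it
	// is passed as both the cert and the key argument.
	const pair = "/var/lib/kubelet/pki/kubelet-client-current.pem"

	cert, err := tls.LoadX509KeyPair(pair, pair)
	if err != nil {
		log.Fatal(err)
	}
	leaf, err := x509.ParseCertificate(cert.Certificate[0])
	if err != nil {
		log.Fatal(err)
	}
	// Rotation hinges on how close NotAfter is: the kubelet requests a
	// fresh certificate well before this deadline passes.
	fmt.Printf("subject=%s expires in %s\n",
		leaf.Subject, time.Until(leaf.NotAfter).Round(time.Minute))
}
```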
Jan 29 12:49:33.714526 kubelet[2603]: I0129 12:49:33.714518 2603 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 12:49:33.714628 kubelet[2603]: E0129 12:49:33.714606 2603 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:49:33.719739 kubelet[2603]: I0129 12:49:33.719707 2603 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:49:33.719877 kubelet[2603]: I0129 12:49:33.719813 2603 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:49:33.725248 kubelet[2603]: I0129 12:49:33.725214 2603 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:49:33.791169 kubelet[2603]: I0129 12:49:33.791132 2603 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 12:49:33.791169 kubelet[2603]: I0129 12:49:33.791150 2603 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 12:49:33.791169 kubelet[2603]: I0129 12:49:33.791166 2603 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:49:33.791332 kubelet[2603]: I0129 12:49:33.791319 2603 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:49:33.791366 kubelet[2603]: I0129 12:49:33.791331 2603 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:49:33.791366 kubelet[2603]: I0129 12:49:33.791348 2603 policy_none.go:49] "None policy: Start" Jan 29 12:49:33.791366 kubelet[2603]: I0129 12:49:33.791359 2603 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 12:49:33.791459 kubelet[2603]: I0129 12:49:33.791369 2603 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:49:33.791517 kubelet[2603]: I0129 12:49:33.791501 2603 state_mem.go:75] "Updated machine memory state" Jan 29 12:49:33.796261 kubelet[2603]: I0129 12:49:33.796107 2603 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:49:33.796328 kubelet[2603]: I0129 12:49:33.796309 2603 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:49:33.796789 kubelet[2603]: I0129 12:49:33.796320 2603 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:49:33.798619 kubelet[2603]: I0129 12:49:33.797496 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:49:33.800153 kubelet[2603]: E0129 12:49:33.800137 2603 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 12:49:33.819524 kubelet[2603]: I0129 12:49:33.815841 2603 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:33.819524 kubelet[2603]: I0129 12:49:33.816188 2603 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:33.819524 kubelet[2603]: I0129 12:49:33.816210 2603 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:33.826556 kubelet[2603]: W0129 12:49:33.826516 2603 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:49:33.828005 kubelet[2603]: W0129 12:49:33.827968 2603 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:49:33.828540 kubelet[2603]: W0129 12:49:33.828493 2603 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:49:33.828592 kubelet[2603]: E0129 12:49:33.828545 2603 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:33.899608 kubelet[2603]: I0129 12:49:33.899578 2603 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:33.911233 kubelet[2603]: I0129 12:49:33.911150 2603 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:33.911233 kubelet[2603]: I0129 12:49:33.911219 2603 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.011351 kubelet[2603]: I0129 12:49:34.011305 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87b4d20d7ddfcf640fe506a78362ef75-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"87b4d20d7ddfcf640fe506a78362ef75\") " pod="kube-system/kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.011558 kubelet[2603]: I0129 12:49:34.011539 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2c7777e913e8333c52eb18354d186f4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"f2c7777e913e8333c52eb18354d186f4\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.011657 kubelet[2603]: I0129 12:49:34.011639 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.011758 kubelet[2603]: I0129 12:49:34.011741 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.011855 kubelet[2603]: I0129 12:49:34.011840 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.011956 kubelet[2603]: I0129 12:49:34.011942 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2c7777e913e8333c52eb18354d186f4-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"f2c7777e913e8333c52eb18354d186f4\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.012113 kubelet[2603]: I0129 12:49:34.012037 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2c7777e913e8333c52eb18354d186f4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"f2c7777e913e8333c52eb18354d186f4\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.012214 kubelet[2603]: I0129 12:49:34.012201 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.012323 kubelet[2603]: I0129 12:49:34.012297 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/637580470fce096096e7992b9f60b148-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal\" (UID: \"637580470fce096096e7992b9f60b148\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.677184 kubelet[2603]: I0129 12:49:34.677089 2603 apiserver.go:52] "Watching apiserver" Jan 29 12:49:34.710710 kubelet[2603]: I0129 12:49:34.710596 2603 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:49:34.757110 kubelet[2603]: I0129 12:49:34.754765 2603 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.770388 kubelet[2603]: W0129 12:49:34.770338 2603 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:49:34.770540 kubelet[2603]: E0129 12:49:34.770445 2603 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:49:34.771356 kubelet[2603]: I0129 12:49:34.771296 2603 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-6-7edc95d587.novalocal" podStartSLOduration=1.771275532 podStartE2EDuration="1.771275532s" podCreationTimestamp="2025-01-29 12:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:49:34.747262803 +0000 UTC m=+1.170219615" watchObservedRunningTime="2025-01-29 12:49:34.771275532 +0000 UTC m=+1.194232334" Jan 29 12:49:34.785798 kubelet[2603]: I0129 12:49:34.785415 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-6-7edc95d587.novalocal" podStartSLOduration=1.785379357 podStartE2EDuration="1.785379357s" podCreationTimestamp="2025-01-29 12:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:49:34.784312955 +0000 UTC m=+1.207269757" watchObservedRunningTime="2025-01-29 12:49:34.785379357 +0000 UTC m=+1.208336400" Jan 29 12:49:34.785798 kubelet[2603]: I0129 12:49:34.785600 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-6-7edc95d587.novalocal" podStartSLOduration=3.7855883 podStartE2EDuration="3.7855883s" podCreationTimestamp="2025-01-29 12:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:49:34.772491995 +0000 UTC m=+1.195448817" watchObservedRunningTime="2025-01-29 12:49:34.7855883 +0000 UTC m=+1.208545122" Jan 29 12:49:37.463815 kubelet[2603]: I0129 12:49:37.463449 2603 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:49:37.464194 containerd[1452]: time="2025-01-29T12:49:37.463747290Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:49:37.466007 kubelet[2603]: I0129 12:49:37.465657 2603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:49:38.269271 systemd[1]: Created slice kubepods-besteffort-pod43b4e264_d864_450d_b4eb_626deaea2946.slice - libcontainer container kubepods-besteffort-pod43b4e264_d864_450d_b4eb_626deaea2946.slice. 
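
The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and the time the pod was observed running; image pull time is excluded, and since these static pods record no pulls (zero-valued firstStartedPulling/lastFinishedPulling), the figure reduces to a plain subtraction. A sketch reproducing the 1.771275532s value from the quoted timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-controller-manager entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-29 12:49:33 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-01-29 12:49:34.771275532 +0000 UTC")

	// With no image pulls recorded, podStartSLOduration is simply the
	// observed-running time minus the creation time.
	fmt.Println(observed.Sub(created)) // 1.771275532s
}
```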
Jan 29 12:49:38.338031 kubelet[2603]: I0129 12:49:38.337984 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43b4e264-d864-450d-b4eb-626deaea2946-kube-proxy\") pod \"kube-proxy-j8l5s\" (UID: \"43b4e264-d864-450d-b4eb-626deaea2946\") " pod="kube-system/kube-proxy-j8l5s" Jan 29 12:49:38.338031 kubelet[2603]: I0129 12:49:38.338025 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43b4e264-d864-450d-b4eb-626deaea2946-xtables-lock\") pod \"kube-proxy-j8l5s\" (UID: \"43b4e264-d864-450d-b4eb-626deaea2946\") " pod="kube-system/kube-proxy-j8l5s" Jan 29 12:49:38.338182 kubelet[2603]: I0129 12:49:38.338053 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43b4e264-d864-450d-b4eb-626deaea2946-lib-modules\") pod \"kube-proxy-j8l5s\" (UID: \"43b4e264-d864-450d-b4eb-626deaea2946\") " pod="kube-system/kube-proxy-j8l5s" Jan 29 12:49:38.338182 kubelet[2603]: I0129 12:49:38.338072 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmvr\" (UniqueName: \"kubernetes.io/projected/43b4e264-d864-450d-b4eb-626deaea2946-kube-api-access-llmvr\") pod \"kube-proxy-j8l5s\" (UID: \"43b4e264-d864-450d-b4eb-626deaea2946\") " pod="kube-system/kube-proxy-j8l5s" Jan 29 12:49:38.578749 systemd[1]: Created slice kubepods-besteffort-podf2e75c5a_1a33_4c2f_99dc_ee42f03b6194.slice - libcontainer container kubepods-besteffort-podf2e75c5a_1a33_4c2f_99dc_ee42f03b6194.slice. Jan 29 12:49:38.588643 containerd[1452]: time="2025-01-29T12:49:38.588603321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8l5s,Uid:43b4e264-d864-450d-b4eb-626deaea2946,Namespace:kube-system,Attempt:0,}" Jan 29 12:49:38.619815 containerd[1452]: time="2025-01-29T12:49:38.619693268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:49:38.619815 containerd[1452]: time="2025-01-29T12:49:38.619757449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:49:38.619815 containerd[1452]: time="2025-01-29T12:49:38.619786363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:49:38.620172 containerd[1452]: time="2025-01-29T12:49:38.619881182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:49:38.640735 systemd[1]: run-containerd-runc-k8s.io-b4d8661faf35db585a98fc81e600969cc69a91acafeaa16dd37122ecfbf24163-runc.cE3ZPv.mount: Deactivated successfully. 
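
Among the kube-proxy volumes above is an xtables-lock host path. kube-proxy mounts the host's xtables lock file (conventionally /run/xtables.lock, though the exact path is not shown in the log) so its iptables invocations serialize against every other iptables writer on the node; the lock is an exclusive flock(2) on that file. A sketch of the same locking discipline against a stand-in path:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"syscall"
)

func main() {
	// Stand-in for the host's xtables lock file mounted into kube-proxy.
	f, err := os.OpenFile("/tmp/xtables.lock.demo", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Take the exclusive lock before mutating rules, as iptables does.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		log.Fatal(err)
	}
	fmt.Println("holding exclusive xtables-style lock; safe to mutate rules")
	// ... iptables-restore equivalent would run here ...
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_UN); err != nil {
		log.Fatal(err)
	}
}
```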
Jan 29 12:49:38.641282 kubelet[2603]: I0129 12:49:38.641249 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f2e75c5a-1a33-4c2f-99dc-ee42f03b6194-var-lib-calico\") pod \"tigera-operator-7d68577dc5-294bf\" (UID: \"f2e75c5a-1a33-4c2f-99dc-ee42f03b6194\") " pod="tigera-operator/tigera-operator-7d68577dc5-294bf"
Jan 29 12:49:38.641574 kubelet[2603]: I0129 12:49:38.641295 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwmjj\" (UniqueName: \"kubernetes.io/projected/f2e75c5a-1a33-4c2f-99dc-ee42f03b6194-kube-api-access-hwmjj\") pod \"tigera-operator-7d68577dc5-294bf\" (UID: \"f2e75c5a-1a33-4c2f-99dc-ee42f03b6194\") " pod="tigera-operator/tigera-operator-7d68577dc5-294bf"
Jan 29 12:49:38.647554 systemd[1]: Started cri-containerd-b4d8661faf35db585a98fc81e600969cc69a91acafeaa16dd37122ecfbf24163.scope - libcontainer container b4d8661faf35db585a98fc81e600969cc69a91acafeaa16dd37122ecfbf24163.
Jan 29 12:49:38.668606 containerd[1452]: time="2025-01-29T12:49:38.668531571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8l5s,Uid:43b4e264-d864-450d-b4eb-626deaea2946,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4d8661faf35db585a98fc81e600969cc69a91acafeaa16dd37122ecfbf24163\""
Jan 29 12:49:38.672145 containerd[1452]: time="2025-01-29T12:49:38.672038023Z" level=info msg="CreateContainer within sandbox \"b4d8661faf35db585a98fc81e600969cc69a91acafeaa16dd37122ecfbf24163\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 12:49:38.700468 containerd[1452]: time="2025-01-29T12:49:38.700384862Z" level=info msg="CreateContainer within sandbox \"b4d8661faf35db585a98fc81e600969cc69a91acafeaa16dd37122ecfbf24163\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b3491c6adfe80d685e49e241a3794ef71b050625037ae6f33401c2ee5e3a5a22\""
Jan 29 12:49:38.701501 containerd[1452]: time="2025-01-29T12:49:38.701439781Z" level=info msg="StartContainer for \"b3491c6adfe80d685e49e241a3794ef71b050625037ae6f33401c2ee5e3a5a22\""
Jan 29 12:49:38.742660 systemd[1]: Started cri-containerd-b3491c6adfe80d685e49e241a3794ef71b050625037ae6f33401c2ee5e3a5a22.scope - libcontainer container b3491c6adfe80d685e49e241a3794ef71b050625037ae6f33401c2ee5e3a5a22.
Jan 29 12:49:38.796003 containerd[1452]: time="2025-01-29T12:49:38.795944243Z" level=info msg="StartContainer for \"b3491c6adfe80d685e49e241a3794ef71b050625037ae6f33401c2ee5e3a5a22\" returns successfully"
Jan 29 12:49:38.888527 containerd[1452]: time="2025-01-29T12:49:38.888490097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-294bf,Uid:f2e75c5a-1a33-4c2f-99dc-ee42f03b6194,Namespace:tigera-operator,Attempt:0,}"
Jan 29 12:49:38.926852 containerd[1452]: time="2025-01-29T12:49:38.926496558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:49:38.927575 containerd[1452]: time="2025-01-29T12:49:38.926786232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:49:38.927988 containerd[1452]: time="2025-01-29T12:49:38.927550577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:49:38.929743 containerd[1452]: time="2025-01-29T12:49:38.929156581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:49:38.951557 systemd[1]: Started cri-containerd-f9de5900ac72ac2771337114dd9963a125a65c325f450380043e0d269cd0d160.scope - libcontainer container f9de5900ac72ac2771337114dd9963a125a65c325f450380043e0d269cd0d160.
Jan 29 12:49:38.990117 containerd[1452]: time="2025-01-29T12:49:38.990085579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-294bf,Uid:f2e75c5a-1a33-4c2f-99dc-ee42f03b6194,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9de5900ac72ac2771337114dd9963a125a65c325f450380043e0d269cd0d160\""
Jan 29 12:49:38.992777 containerd[1452]: time="2025-01-29T12:49:38.992591643Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 29 12:49:39.296672 sudo[1713]: pam_unix(sudo:session): session closed for user root
Jan 29 12:49:39.444113 sshd[1710]: pam_unix(sshd:session): session closed for user core
Jan 29 12:49:39.471698 systemd[1]: sshd@8-172.24.4.220:22-172.24.4.1:38876.service: Deactivated successfully.
Jan 29 12:49:39.477518 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 12:49:39.478181 systemd[1]: session-11.scope: Consumed 7.352s CPU time, 160.9M memory peak, 0B memory swap peak.
Jan 29 12:49:39.481647 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:49:39.486118 systemd-logind[1437]: Removed session 11.
Jan 29 12:49:40.777584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700808395.mount: Deactivated successfully.
Jan 29 12:49:41.758197 containerd[1452]: time="2025-01-29T12:49:41.758134463Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:49:41.759421 containerd[1452]: time="2025-01-29T12:49:41.759333313Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 29 12:49:41.760789 containerd[1452]: time="2025-01-29T12:49:41.760748570Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:49:41.764803 containerd[1452]: time="2025-01-29T12:49:41.764727198Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:49:41.765633 containerd[1452]: time="2025-01-29T12:49:41.765486833Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.772865114s"
Jan 29 12:49:41.765633 containerd[1452]: time="2025-01-29T12:49:41.765518413Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 29 12:49:41.768191 containerd[1452]: time="2025-01-29T12:49:41.768161594Z" level=info msg="CreateContainer within sandbox \"f9de5900ac72ac2771337114dd9963a125a65c325f450380043e0d269cd0d160\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 29 12:49:41.790884 containerd[1452]: time="2025-01-29T12:49:41.790832776Z" level=info msg="CreateContainer within sandbox \"f9de5900ac72ac2771337114dd9963a125a65c325f450380043e0d269cd0d160\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cdf1eab83d23fea0baf8342d39fd5299620496e6cd48e5d74727c5c4e6f0b38c\""
Jan 29 12:49:41.791673 containerd[1452]: time="2025-01-29T12:49:41.791645491Z" level=info msg="StartContainer for \"cdf1eab83d23fea0baf8342d39fd5299620496e6cd48e5d74727c5c4e6f0b38c\""
Jan 29 12:49:41.826579 systemd[1]: Started cri-containerd-cdf1eab83d23fea0baf8342d39fd5299620496e6cd48e5d74727c5c4e6f0b38c.scope - libcontainer container cdf1eab83d23fea0baf8342d39fd5299620496e6cd48e5d74727c5c4e6f0b38c.
Jan 29 12:49:41.854608 containerd[1452]: time="2025-01-29T12:49:41.854227935Z" level=info msg="StartContainer for \"cdf1eab83d23fea0baf8342d39fd5299620496e6cd48e5d74727c5c4e6f0b38c\" returns successfully"
Jan 29 12:49:42.837122 kubelet[2603]: I0129 12:49:42.837012 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j8l5s" podStartSLOduration=4.836977554 podStartE2EDuration="4.836977554s" podCreationTimestamp="2025-01-29 12:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:49:39.797997838 +0000 UTC m=+6.220954720" watchObservedRunningTime="2025-01-29 12:49:42.836977554 +0000 UTC m=+9.259934406"
Jan 29 12:49:43.313544 kubelet[2603]: I0129 12:49:43.313428 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-294bf" podStartSLOduration=2.538493795 podStartE2EDuration="5.313371366s" podCreationTimestamp="2025-01-29 12:49:38 +0000 UTC" firstStartedPulling="2025-01-29 12:49:38.991701221 +0000 UTC m=+5.414658033" lastFinishedPulling="2025-01-29 12:49:41.766578802 +0000 UTC m=+8.189535604" observedRunningTime="2025-01-29 12:49:42.839688502 +0000 UTC m=+9.262645385" watchObservedRunningTime="2025-01-29 12:49:43.313371366 +0000 UTC m=+9.736328218"
Jan 29 12:49:45.331414 systemd[1]: Created slice kubepods-besteffort-pod2d4b85af_fdc7_4e58_a11b_7c5b65703730.slice - libcontainer container kubepods-besteffort-pod2d4b85af_fdc7_4e58_a11b_7c5b65703730.slice.
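Editor's note: for tigera-operator the two durations above differ because the SLO duration excludes time spent pulling images, while the E2E duration does not; the gap matches the 2.77s pull logged earlier. A back-of-envelope check in Go (a sketch, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

// Timestamps as printed by Go's time.Time.String(), taken from the
// tigera-operator line above (monotonic "m=+..." suffixes dropped).
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	e2e, _ := time.ParseDuration("5.313371366s")
	firstPull := mustParse("2025-01-29 12:49:38.991701221 +0000 UTC")
	lastPull := mustParse("2025-01-29 12:49:41.766578802 +0000 UTC")
	pull := lastPull.Sub(firstPull) // 2.774877581s, the pull logged above
	// SLO duration = end-to-end duration minus time spent pulling images.
	fmt.Println(e2e - pull) // 2.538493785s, within ~10ns of the logged podStartSLOduration
}
```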
Jan 29 12:49:45.381142 kubelet[2603]: I0129 12:49:45.381069 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2d4b85af-fdc7-4e58-a11b-7c5b65703730-typha-certs\") pod \"calico-typha-64cdf569d7-5ltxh\" (UID: \"2d4b85af-fdc7-4e58-a11b-7c5b65703730\") " pod="calico-system/calico-typha-64cdf569d7-5ltxh"
Jan 29 12:49:45.381142 kubelet[2603]: I0129 12:49:45.381119 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjpxh\" (UniqueName: \"kubernetes.io/projected/2d4b85af-fdc7-4e58-a11b-7c5b65703730-kube-api-access-gjpxh\") pod \"calico-typha-64cdf569d7-5ltxh\" (UID: \"2d4b85af-fdc7-4e58-a11b-7c5b65703730\") " pod="calico-system/calico-typha-64cdf569d7-5ltxh"
Jan 29 12:49:45.381614 kubelet[2603]: I0129 12:49:45.381171 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d4b85af-fdc7-4e58-a11b-7c5b65703730-tigera-ca-bundle\") pod \"calico-typha-64cdf569d7-5ltxh\" (UID: \"2d4b85af-fdc7-4e58-a11b-7c5b65703730\") " pod="calico-system/calico-typha-64cdf569d7-5ltxh"
Jan 29 12:49:45.431121 systemd[1]: Created slice kubepods-besteffort-pod764e67b7_5a88_485f_a7d3_419e30cb004d.slice - libcontainer container kubepods-besteffort-pod764e67b7_5a88_485f_a7d3_419e30cb004d.slice.
Jan 29 12:49:45.556851 kubelet[2603]: E0129 12:49:45.556741 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98"
Jan 29 12:49:45.583636 kubelet[2603]: I0129 12:49:45.583379 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/764e67b7-5a88-485f-a7d3-419e30cb004d-xtables-lock\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.583636 kubelet[2603]: I0129 12:49:45.583454 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/764e67b7-5a88-485f-a7d3-419e30cb004d-cni-net-dir\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.583636 kubelet[2603]: I0129 12:49:45.583476 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/764e67b7-5a88-485f-a7d3-419e30cb004d-cni-log-dir\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.583636 kubelet[2603]: I0129 12:49:45.583502 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b3395d29-aa34-40db-87bd-39bbc4377d98-varrun\") pod \"csi-node-driver-fldpg\" (UID: \"b3395d29-aa34-40db-87bd-39bbc4377d98\") " pod="calico-system/csi-node-driver-fldpg"
Jan 29 12:49:45.583636 kubelet[2603]: I0129 12:49:45.583528 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/764e67b7-5a88-485f-a7d3-419e30cb004d-lib-modules\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.583866 kubelet[2603]: I0129 12:49:45.583563 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/764e67b7-5a88-485f-a7d3-419e30cb004d-cni-bin-dir\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.583866 kubelet[2603]: I0129 12:49:45.583585 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/764e67b7-5a88-485f-a7d3-419e30cb004d-flexvol-driver-host\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.583866 kubelet[2603]: I0129 12:49:45.583615 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-227qj\" (UniqueName: \"kubernetes.io/projected/b3395d29-aa34-40db-87bd-39bbc4377d98-kube-api-access-227qj\") pod \"csi-node-driver-fldpg\" (UID: \"b3395d29-aa34-40db-87bd-39bbc4377d98\") " pod="calico-system/csi-node-driver-fldpg"
Jan 29 12:49:45.583866 kubelet[2603]: I0129 12:49:45.583636 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/764e67b7-5a88-485f-a7d3-419e30cb004d-var-lib-calico\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.583866 kubelet[2603]: I0129 12:49:45.583654 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8w2b\" (UniqueName: \"kubernetes.io/projected/764e67b7-5a88-485f-a7d3-419e30cb004d-kube-api-access-n8w2b\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.584001 kubelet[2603]: I0129 12:49:45.583675 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b3395d29-aa34-40db-87bd-39bbc4377d98-socket-dir\") pod \"csi-node-driver-fldpg\" (UID: \"b3395d29-aa34-40db-87bd-39bbc4377d98\") " pod="calico-system/csi-node-driver-fldpg"
Jan 29 12:49:45.584001 kubelet[2603]: I0129 12:49:45.583694 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/764e67b7-5a88-485f-a7d3-419e30cb004d-policysync\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.584001 kubelet[2603]: I0129 12:49:45.583714 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3395d29-aa34-40db-87bd-39bbc4377d98-kubelet-dir\") pod \"csi-node-driver-fldpg\" (UID: \"b3395d29-aa34-40db-87bd-39bbc4377d98\") " pod="calico-system/csi-node-driver-fldpg"
Jan 29 12:49:45.584001 kubelet[2603]: I0129 12:49:45.583745 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/764e67b7-5a88-485f-a7d3-419e30cb004d-node-certs\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.584001 kubelet[2603]: I0129 12:49:45.583767 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/764e67b7-5a88-485f-a7d3-419e30cb004d-var-run-calico\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.584133 kubelet[2603]: I0129 12:49:45.583788 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/764e67b7-5a88-485f-a7d3-419e30cb004d-tigera-ca-bundle\") pod \"calico-node-sm9ph\" (UID: \"764e67b7-5a88-485f-a7d3-419e30cb004d\") " pod="calico-system/calico-node-sm9ph"
Jan 29 12:49:45.584133 kubelet[2603]: I0129 12:49:45.583807 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b3395d29-aa34-40db-87bd-39bbc4377d98-registration-dir\") pod \"csi-node-driver-fldpg\" (UID: \"b3395d29-aa34-40db-87bd-39bbc4377d98\") " pod="calico-system/csi-node-driver-fldpg"
Jan 29 12:49:45.639054 containerd[1452]: time="2025-01-29T12:49:45.639002547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64cdf569d7-5ltxh,Uid:2d4b85af-fdc7-4e58-a11b-7c5b65703730,Namespace:calico-system,Attempt:0,}"
Jan 29 12:49:45.703433 kubelet[2603]: E0129 12:49:45.700339 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:49:45.703433 kubelet[2603]: W0129 12:49:45.700390 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:49:45.703433 kubelet[2603]: E0129 12:49:45.700437 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:49:45.703684 containerd[1452]: time="2025-01-29T12:49:45.696803558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:49:45.703684 containerd[1452]: time="2025-01-29T12:49:45.702162375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:49:45.703684 containerd[1452]: time="2025-01-29T12:49:45.702196008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:49:45.703684 containerd[1452]: time="2025-01-29T12:49:45.702284364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:49:45.730441 kubelet[2603]: E0129 12:49:45.729388 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:49:45.730441 kubelet[2603]: W0129 12:49:45.729584 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:49:45.730441 kubelet[2603]: E0129 12:49:45.729754 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:49:45.730441 kubelet[2603]: E0129 12:49:45.730240 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:49:45.730441 kubelet[2603]: W0129 12:49:45.730366 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:49:45.730872 kubelet[2603]: E0129 12:49:45.730773 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:49:45.734921 containerd[1452]: time="2025-01-29T12:49:45.734888986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sm9ph,Uid:764e67b7-5a88-485f-a7d3-419e30cb004d,Namespace:calico-system,Attempt:0,}"
Jan 29 12:49:45.737611 systemd[1]: Started cri-containerd-57d818a0360eeb46491b863232e2d73da90161a4602aed0d4da1db6a7848ce17.scope - libcontainer container 57d818a0360eeb46491b863232e2d73da90161a4602aed0d4da1db6a7848ce17.
Jan 29 12:49:45.788120 containerd[1452]: time="2025-01-29T12:49:45.787654245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:49:45.788120 containerd[1452]: time="2025-01-29T12:49:45.787729647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:49:45.788120 containerd[1452]: time="2025-01-29T12:49:45.787749415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:49:45.788120 containerd[1452]: time="2025-01-29T12:49:45.787864240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:49:45.819668 systemd[1]: Started cri-containerd-1eb5a9742ade836b4cedeecbbde74edcc511b2fcb2e803ef163cc1273b6e5c4c.scope - libcontainer container 1eb5a9742ade836b4cedeecbbde74edcc511b2fcb2e803ef163cc1273b6e5c4c.
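Editor's note: the three-message FlexVolume pattern above recurs throughout this log. The kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary is absent, the driver call therefore returns empty output, and unmarshalling "" as JSON fails. Both error strings can be reproduced in a few lines of Go (a standalone sketch, assuming no "uds" binary is on PATH; this is not kubelet code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Invoking a driver binary that does not exist yields the first error
	// the kubelet logs.
	out, err := exec.Command("uds", "init").Output()
	fmt.Println(err) // exec: "uds": executable file not found in $PATH

	// The failed call leaves the output empty, and unmarshalling an empty
	// byte slice as JSON yields the second error.
	var status map[string]interface{}
	fmt.Println(json.Unmarshal(out, &status)) // unexpected end of JSON input
}
```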
Jan 29 12:49:45.831176 containerd[1452]: time="2025-01-29T12:49:45.831097550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64cdf569d7-5ltxh,Uid:2d4b85af-fdc7-4e58-a11b-7c5b65703730,Namespace:calico-system,Attempt:0,} returns sandbox id \"57d818a0360eeb46491b863232e2d73da90161a4602aed0d4da1db6a7848ce17\""
Jan 29 12:49:45.834994 containerd[1452]: time="2025-01-29T12:49:45.834039561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 12:49:45.866032 containerd[1452]: time="2025-01-29T12:49:45.865811871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sm9ph,Uid:764e67b7-5a88-485f-a7d3-419e30cb004d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1eb5a9742ade836b4cedeecbbde74edcc511b2fcb2e803ef163cc1273b6e5c4c\""
Jan 29 12:49:47.716094 kubelet[2603]: E0129 12:49:47.715716 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98"
Jan 29 12:49:47.810146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923918122.mount: Deactivated successfully.
Jan 29 12:49:49.190184 containerd[1452]: time="2025-01-29T12:49:49.187971868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:49:49.190951 containerd[1452]: time="2025-01-29T12:49:49.190891657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 29 12:49:49.192481 containerd[1452]: time="2025-01-29T12:49:49.192437098Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:49:49.200552 containerd[1452]: time="2025-01-29T12:49:49.198057245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:49:49.202263 containerd[1452]: time="2025-01-29T12:49:49.202218154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.368083244s"
Jan 29 12:49:49.202263 containerd[1452]: time="2025-01-29T12:49:49.202256466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 29 12:49:49.206438 containerd[1452]: time="2025-01-29T12:49:49.205871911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 12:49:49.214186 containerd[1452]: time="2025-01-29T12:49:49.214052883Z" level=info msg="CreateContainer within sandbox \"57d818a0360eeb46491b863232e2d73da90161a4602aed0d4da1db6a7848ce17\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 12:49:49.251730 containerd[1452]: time="2025-01-29T12:49:49.251572505Z" level=info msg="CreateContainer within sandbox \"57d818a0360eeb46491b863232e2d73da90161a4602aed0d4da1db6a7848ce17\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6d78e828f71b4b4444a83c3ba778c4131adf14e0512e99ca591dbe01ee488c9d\""
Jan 29 12:49:49.252520 containerd[1452]: time="2025-01-29T12:49:49.252369150Z" level=info msg="StartContainer for \"6d78e828f71b4b4444a83c3ba778c4131adf14e0512e99ca591dbe01ee488c9d\""
Jan 29 12:49:49.300573 systemd[1]: Started cri-containerd-6d78e828f71b4b4444a83c3ba778c4131adf14e0512e99ca591dbe01ee488c9d.scope - libcontainer container 6d78e828f71b4b4444a83c3ba778c4131adf14e0512e99ca591dbe01ee488c9d.
Jan 29 12:49:49.364110 containerd[1452]: time="2025-01-29T12:49:49.363653913Z" level=info msg="StartContainer for \"6d78e828f71b4b4444a83c3ba778c4131adf14e0512e99ca591dbe01ee488c9d\" returns successfully"
Jan 29 12:49:49.715970 kubelet[2603]: E0129 12:49:49.714912 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98"
Jan 29 12:49:49.890061 kubelet[2603]: I0129 12:49:49.888755 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64cdf569d7-5ltxh" podStartSLOduration=1.5180129660000001 podStartE2EDuration="4.888726226s" podCreationTimestamp="2025-01-29 12:49:45 +0000 UTC" firstStartedPulling="2025-01-29 12:49:45.833145803 +0000 UTC m=+12.256102615" lastFinishedPulling="2025-01-29 12:49:49.203859062 +0000 UTC m=+15.626815875" observedRunningTime="2025-01-29 12:49:49.886338085 +0000 UTC m=+16.309294967" watchObservedRunningTime="2025-01-29 12:49:49.888726226 +0000 UTC m=+16.311683078"
Jan 29 12:49:49.911659 kubelet[2603]: E0129 12:49:49.911618 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:49:49.911659 kubelet[2603]: W0129 12:49:49.911655 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:49:49.912186 kubelet[2603]: E0129 12:49:49.911690 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:49:49.912186 kubelet[2603]: E0129 12:49:49.912090 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:49:49.912186 kubelet[2603]: W0129 12:49:49.912112 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:49:49.912186 kubelet[2603]: E0129 12:49:49.912133 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:49:50.848163 kubelet[2603]: I0129 12:49:50.847986 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:49:50.924935 kubelet[2603]: E0129 12:49:50.924483 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:49:50.924935 kubelet[2603]: W0129 12:49:50.924521 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:49:50.924935 kubelet[2603]: E0129 12:49:50.924539 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:49:50.933728 kubelet[2603]: E0129 12:49:50.933704 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:49:50.934043 kubelet[2603]: W0129 12:49:50.933854 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:49:50.934043 kubelet[2603]: E0129 12:49:50.933895 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 29 12:49:50.934742 kubelet[2603]: E0129 12:49:50.934586 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:49:50.934742 kubelet[2603]: W0129 12:49:50.934613 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:49:50.935178 kubelet[2603]: E0129 12:49:50.934950 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:49:50.935616 kubelet[2603]: E0129 12:49:50.935367 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:49:50.935616 kubelet[2603]: W0129 12:49:50.935392 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:49:50.935616 kubelet[2603]: E0129 12:49:50.935463 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:49:50.936531 kubelet[2603]: E0129 12:49:50.936129 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:49:50.936531 kubelet[2603]: W0129 12:49:50.936154 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:49:50.936531 kubelet[2603]: E0129 12:49:50.936192 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:49:50.937295 kubelet[2603]: E0129 12:49:50.937079 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:49:50.937295 kubelet[2603]: W0129 12:49:50.937105 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:49:50.937295 kubelet[2603]: E0129 12:49:50.937238 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:49:50.938160 kubelet[2603]: E0129 12:49:50.938135 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:49:50.938565 kubelet[2603]: W0129 12:49:50.938284 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:49:50.938565 kubelet[2603]: E0129 12:49:50.938326 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:49:50.939132 kubelet[2603]: E0129 12:49:50.939078 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:49:50.939132 kubelet[2603]: W0129 12:49:50.939104 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:49:50.940102 kubelet[2603]: E0129 12:49:50.939607 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:49:50.940565 kubelet[2603]: E0129 12:49:50.940539 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:49:50.941300 kubelet[2603]: W0129 12:49:50.940704 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:49:50.941300 kubelet[2603]: E0129 12:49:50.940746 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:49:50.942245 kubelet[2603]: E0129 12:49:50.942217 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:49:50.942391 kubelet[2603]: W0129 12:49:50.942366 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:49:50.942858 kubelet[2603]: E0129 12:49:50.942730 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:49:50.943304 kubelet[2603]: E0129 12:49:50.943201 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:49:50.943304 kubelet[2603]: W0129 12:49:50.943231 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:49:50.943304 kubelet[2603]: E0129 12:49:50.943253 2603 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
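[Annotation: these records come from the kubelet's FlexVolume prober, which execs the driver binary with `init` and parses the JSON the driver prints on stdout. The binary does not exist yet, so the exec fails with Go's exec.ErrNotFound and the empty captured output then fails to decode. A minimal, illustrative Go sketch (not the kubelet's actual code) that reproduces both logged error strings, assuming no binary named "uds" is on PATH:]

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus stands in for the JSON reply a FlexVolume driver is
// expected to print on stdout; the field set here is illustrative only.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// 1. The driver binary is missing, so lookup fails with
	//    exec.ErrNotFound, whose text is exactly the string logged:
	//    "executable file not found in $PATH".
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("driver call failed:", err)
	}

	// 2. Because the call never ran, the captured output is empty,
	//    and decoding "" fails with "unexpected end of JSON input",
	//    the driver-call.go:262 error repeated above.
	var st DriverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("unmarshal failed:", err)
	}
}
```

[The spam stops once the flexvol-driver init container, whose image pull appears just below, installs the uds binary into the plugin directory.]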
Jan 29 12:49:51.233760 containerd[1452]: time="2025-01-29T12:49:51.233718458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:51.236018 containerd[1452]: time="2025-01-29T12:49:51.235970003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 12:49:51.237668 containerd[1452]: time="2025-01-29T12:49:51.237623666Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:51.240307 containerd[1452]: time="2025-01-29T12:49:51.240223645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:51.240916 containerd[1452]: time="2025-01-29T12:49:51.240878675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.034954086s" Jan 29 12:49:51.240972 containerd[1452]: time="2025-01-29T12:49:51.240916686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 12:49:51.243386 containerd[1452]: time="2025-01-29T12:49:51.243277065Z" level=info msg="CreateContainer within sandbox \"1eb5a9742ade836b4cedeecbbde74edcc511b2fcb2e803ef163cc1273b6e5c4c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:49:51.263947 containerd[1452]: time="2025-01-29T12:49:51.263827891Z" level=info msg="CreateContainer within sandbox \"1eb5a9742ade836b4cedeecbbde74edcc511b2fcb2e803ef163cc1273b6e5c4c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a06e3a2fce1b0154b732d4453061d8749825893147752e3f2cb44ce7f203fb2\"" Jan 29 12:49:51.265209 containerd[1452]: time="2025-01-29T12:49:51.264881648Z" level=info msg="StartContainer for \"7a06e3a2fce1b0154b732d4453061d8749825893147752e3f2cb44ce7f203fb2\"" Jan 29 12:49:51.303574 systemd[1]: Started cri-containerd-7a06e3a2fce1b0154b732d4453061d8749825893147752e3f2cb44ce7f203fb2.scope - libcontainer container 7a06e3a2fce1b0154b732d4453061d8749825893147752e3f2cb44ce7f203fb2. Jan 29 12:49:51.332450 containerd[1452]: time="2025-01-29T12:49:51.332378804Z" level=info msg="StartContainer for \"7a06e3a2fce1b0154b732d4453061d8749825893147752e3f2cb44ce7f203fb2\" returns successfully" Jan 29 12:49:51.342550 systemd[1]: cri-containerd-7a06e3a2fce1b0154b732d4453061d8749825893147752e3f2cb44ce7f203fb2.scope: Deactivated successfully. Jan 29 12:49:51.366145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a06e3a2fce1b0154b732d4453061d8749825893147752e3f2cb44ce7f203fb2-rootfs.mount: Deactivated successfully.
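[Annotation: the pull above runs in containerd's "k8s.io" namespace, which is why every ImageCreate event carries the io.cri-containerd.image=managed label. A hedged sketch of the same pull through the containerd 1.x Go client, assuming the default socket path and the namespace the CRI plugin uses; this is illustrative, not kubelet code:]

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image the log shows being fetched.
	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```

[Note also that flexvol-driver is an init container that runs once and exits, which is why its .scope unit is started and deactivated within the same second above.]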
Jan 29 12:49:51.715497 kubelet[2603]: E0129 12:49:51.715332 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98" Jan 29 12:49:52.200158 containerd[1452]: time="2025-01-29T12:49:52.200013814Z" level=info msg="shim disconnected" id=7a06e3a2fce1b0154b732d4453061d8749825893147752e3f2cb44ce7f203fb2 namespace=k8s.io Jan 29 12:49:52.200158 containerd[1452]: time="2025-01-29T12:49:52.200135772Z" level=warning msg="cleaning up after shim disconnected" id=7a06e3a2fce1b0154b732d4453061d8749825893147752e3f2cb44ce7f203fb2 namespace=k8s.io Jan 29 12:49:52.200158 containerd[1452]: time="2025-01-29T12:49:52.200158745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:49:52.232630 containerd[1452]: time="2025-01-29T12:49:52.232488395Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:49:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:49:52.863701 containerd[1452]: time="2025-01-29T12:49:52.861728861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:49:53.720729 kubelet[2603]: E0129 12:49:53.720589 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98" Jan 29 12:49:55.722444 kubelet[2603]: E0129 12:49:55.722316 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98" Jan 29 12:49:57.716972 kubelet[2603]: E0129 12:49:57.716877 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98" Jan 29 12:49:58.437366 containerd[1452]: time="2025-01-29T12:49:58.437315122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:58.440121 containerd[1452]: time="2025-01-29T12:49:58.440041327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 12:49:58.440758 containerd[1452]: time="2025-01-29T12:49:58.440702698Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:58.446569 containerd[1452]: time="2025-01-29T12:49:58.446543077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:49:58.448124 containerd[1452]: 
time="2025-01-29T12:49:58.448097705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.586236976s" Jan 29 12:49:58.448220 containerd[1452]: time="2025-01-29T12:49:58.448200447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 12:49:58.451372 containerd[1452]: time="2025-01-29T12:49:58.451344256Z" level=info msg="CreateContainer within sandbox \"1eb5a9742ade836b4cedeecbbde74edcc511b2fcb2e803ef163cc1273b6e5c4c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:49:58.470439 containerd[1452]: time="2025-01-29T12:49:58.470377741Z" level=info msg="CreateContainer within sandbox \"1eb5a9742ade836b4cedeecbbde74edcc511b2fcb2e803ef163cc1273b6e5c4c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f64df153969fe9deec57f288e243a1bca16f77f75b46313d753f1b0e5d95363c\"" Jan 29 12:49:58.472048 containerd[1452]: time="2025-01-29T12:49:58.472019792Z" level=info msg="StartContainer for \"f64df153969fe9deec57f288e243a1bca16f77f75b46313d753f1b0e5d95363c\"" Jan 29 12:49:58.512575 systemd[1]: Started cri-containerd-f64df153969fe9deec57f288e243a1bca16f77f75b46313d753f1b0e5d95363c.scope - libcontainer container f64df153969fe9deec57f288e243a1bca16f77f75b46313d753f1b0e5d95363c. Jan 29 12:49:58.553146 containerd[1452]: time="2025-01-29T12:49:58.552343313Z" level=info msg="StartContainer for \"f64df153969fe9deec57f288e243a1bca16f77f75b46313d753f1b0e5d95363c\" returns successfully" Jan 29 12:49:59.725812 kubelet[2603]: E0129 12:49:59.725712 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98" Jan 29 12:49:59.753064 containerd[1452]: time="2025-01-29T12:49:59.752949492Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:49:59.757980 systemd[1]: cri-containerd-f64df153969fe9deec57f288e243a1bca16f77f75b46313d753f1b0e5d95363c.scope: Deactivated successfully. Jan 29 12:49:59.787008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f64df153969fe9deec57f288e243a1bca16f77f75b46313d753f1b0e5d95363c-rootfs.mount: Deactivated successfully. Jan 29 12:49:59.788648 kubelet[2603]: I0129 12:49:59.788595 2603 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 12:50:00.135839 systemd[1]: Created slice kubepods-burstable-podb7e92067_48ec_4b6c_a725_b3129763f04a.slice - libcontainer container kubepods-burstable-podb7e92067_48ec_4b6c_a725_b3129763f04a.slice. Jan 29 12:50:00.171461 systemd[1]: Created slice kubepods-burstable-pod0f12975c_89a3_46d1_87fb_a8eed8bcd180.slice - libcontainer container kubepods-burstable-pod0f12975c_89a3_46d1_87fb_a8eed8bcd180.slice. 
Jan 29 12:50:00.176144 systemd[1]: Created slice kubepods-besteffort-pod75b0dcc9_8d93_4002_b03a_5f5411f1a957.slice - libcontainer container kubepods-besteffort-pod75b0dcc9_8d93_4002_b03a_5f5411f1a957.slice. Jan 29 12:50:00.182668 systemd[1]: Created slice kubepods-besteffort-pod47cdff09_3717_4637_aaa6_498f177eaff7.slice - libcontainer container kubepods-besteffort-pod47cdff09_3717_4637_aaa6_498f177eaff7.slice. Jan 29 12:50:00.188184 systemd[1]: Created slice kubepods-besteffort-pod99f92faa_8e44_47bb_8c33_cf1d3c148912.slice - libcontainer container kubepods-besteffort-pod99f92faa_8e44_47bb_8c33_cf1d3c148912.slice. Jan 29 12:50:00.198877 kubelet[2603]: I0129 12:50:00.198586 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75b0dcc9-8d93-4002-b03a-5f5411f1a957-tigera-ca-bundle\") pod \"calico-kube-controllers-76fcdb488d-h7k99\" (UID: \"75b0dcc9-8d93-4002-b03a-5f5411f1a957\") " pod="calico-system/calico-kube-controllers-76fcdb488d-h7k99" Jan 29 12:50:00.198877 kubelet[2603]: I0129 12:50:00.198624 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkscl\" (UniqueName: \"kubernetes.io/projected/75b0dcc9-8d93-4002-b03a-5f5411f1a957-kube-api-access-fkscl\") pod \"calico-kube-controllers-76fcdb488d-h7k99\" (UID: \"75b0dcc9-8d93-4002-b03a-5f5411f1a957\") " pod="calico-system/calico-kube-controllers-76fcdb488d-h7k99" Jan 29 12:50:00.198877 kubelet[2603]: I0129 12:50:00.198647 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99f92faa-8e44-47bb-8c33-cf1d3c148912-calico-apiserver-certs\") pod \"calico-apiserver-55d77dbf59-t4fpz\" (UID: \"99f92faa-8e44-47bb-8c33-cf1d3c148912\") " pod="calico-apiserver/calico-apiserver-55d77dbf59-t4fpz" Jan 29 12:50:00.198877 kubelet[2603]: I0129 12:50:00.198667 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f12975c-89a3-46d1-87fb-a8eed8bcd180-config-volume\") pod \"coredns-668d6bf9bc-n5k7q\" (UID: \"0f12975c-89a3-46d1-87fb-a8eed8bcd180\") " pod="kube-system/coredns-668d6bf9bc-n5k7q" Jan 29 12:50:00.198877 kubelet[2603]: I0129 12:50:00.198691 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8pkc\" (UniqueName: \"kubernetes.io/projected/47cdff09-3717-4637-aaa6-498f177eaff7-kube-api-access-q8pkc\") pod \"calico-apiserver-55d77dbf59-pmmxr\" (UID: \"47cdff09-3717-4637-aaa6-498f177eaff7\") " pod="calico-apiserver/calico-apiserver-55d77dbf59-pmmxr" Jan 29 12:50:00.199104 kubelet[2603]: I0129 12:50:00.198709 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ps5\" (UniqueName: \"kubernetes.io/projected/0f12975c-89a3-46d1-87fb-a8eed8bcd180-kube-api-access-54ps5\") pod \"coredns-668d6bf9bc-n5k7q\" (UID: \"0f12975c-89a3-46d1-87fb-a8eed8bcd180\") " pod="kube-system/coredns-668d6bf9bc-n5k7q"
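[Annotation: these reconciler records are the kubelet's volume manager confirming that each volume a pod declares is attached before it is mounted; each UniqueName pairs a volume plugin (kubernetes.io/configmap, kubernetes.io/secret, kubernetes.io/projected) with the owning pod's UID. A hedged sketch of the pod-spec side that produces such entries, using the k8s.io/api/core/v1 types; volume names are taken from the log, while the referenced ConfigMap object name is an assumption:]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A volume set of the kind being verified above: a ConfigMap volume
	// ("config-volume") plus a projected service-account token volume
	// ("kube-api-access-..."). The ConfigMap name "coredns" is assumed;
	// the log only shows the volume name, not the referenced object.
	vols := []corev1.Volume{
		{
			Name: "config-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
				},
			},
		},
		{
			Name: "kube-api-access-54ps5",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
	}
	for _, v := range vols {
		fmt.Println("declared volume:", v.Name)
	}
}
```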
pod="kube-system/coredns-668d6bf9bc-zwt5p" Jan 29 12:50:00.199104 kubelet[2603]: I0129 12:50:00.198744 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrsz9\" (UniqueName: \"kubernetes.io/projected/b7e92067-48ec-4b6c-a725-b3129763f04a-kube-api-access-hrsz9\") pod \"coredns-668d6bf9bc-zwt5p\" (UID: \"b7e92067-48ec-4b6c-a725-b3129763f04a\") " pod="kube-system/coredns-668d6bf9bc-zwt5p" Jan 29 12:50:00.199104 kubelet[2603]: I0129 12:50:00.198763 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62ml2\" (UniqueName: \"kubernetes.io/projected/99f92faa-8e44-47bb-8c33-cf1d3c148912-kube-api-access-62ml2\") pod \"calico-apiserver-55d77dbf59-t4fpz\" (UID: \"99f92faa-8e44-47bb-8c33-cf1d3c148912\") " pod="calico-apiserver/calico-apiserver-55d77dbf59-t4fpz" Jan 29 12:50:00.199104 kubelet[2603]: I0129 12:50:00.198784 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47cdff09-3717-4637-aaa6-498f177eaff7-calico-apiserver-certs\") pod \"calico-apiserver-55d77dbf59-pmmxr\" (UID: \"47cdff09-3717-4637-aaa6-498f177eaff7\") " pod="calico-apiserver/calico-apiserver-55d77dbf59-pmmxr" Jan 29 12:50:00.458273 containerd[1452]: time="2025-01-29T12:50:00.457559912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwt5p,Uid:b7e92067-48ec-4b6c-a725-b3129763f04a,Namespace:kube-system,Attempt:0,}" Jan 29 12:50:00.475971 containerd[1452]: time="2025-01-29T12:50:00.475846024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n5k7q,Uid:0f12975c-89a3-46d1-87fb-a8eed8bcd180,Namespace:kube-system,Attempt:0,}" Jan 29 12:50:00.480084 containerd[1452]: time="2025-01-29T12:50:00.479951527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fcdb488d-h7k99,Uid:75b0dcc9-8d93-4002-b03a-5f5411f1a957,Namespace:calico-system,Attempt:0,}" Jan 29 12:50:00.485593 containerd[1452]: time="2025-01-29T12:50:00.485510077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d77dbf59-pmmxr,Uid:47cdff09-3717-4637-aaa6-498f177eaff7,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:50:00.491543 containerd[1452]: time="2025-01-29T12:50:00.491383077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d77dbf59-t4fpz,Uid:99f92faa-8e44-47bb-8c33-cf1d3c148912,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:50:00.729502 containerd[1452]: time="2025-01-29T12:50:00.727265401Z" level=info msg="shim disconnected" id=f64df153969fe9deec57f288e243a1bca16f77f75b46313d753f1b0e5d95363c namespace=k8s.io Jan 29 12:50:00.729502 containerd[1452]: time="2025-01-29T12:50:00.727461709Z" level=warning msg="cleaning up after shim disconnected" id=f64df153969fe9deec57f288e243a1bca16f77f75b46313d753f1b0e5d95363c namespace=k8s.io Jan 29 12:50:00.729502 containerd[1452]: time="2025-01-29T12:50:00.727489582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:50:00.902543 containerd[1452]: time="2025-01-29T12:50:00.902339977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:50:01.019975 containerd[1452]: time="2025-01-29T12:50:01.019750103Z" level=error msg="Failed to destroy network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.021324 containerd[1452]: time="2025-01-29T12:50:01.021143086Z" level=error msg="Failed to destroy network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.021480 containerd[1452]: time="2025-01-29T12:50:01.021359743Z" level=error msg="encountered an error cleaning up failed sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.021480 containerd[1452]: time="2025-01-29T12:50:01.021569827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwt5p,Uid:b7e92067-48ec-4b6c-a725-b3129763f04a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.022302 kubelet[2603]: E0129 12:50:01.021855 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.022302 kubelet[2603]: E0129 12:50:01.021927 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwt5p" Jan 29 12:50:01.022302 kubelet[2603]: E0129 12:50:01.021951 2603 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwt5p" Jan 29 12:50:01.022990 kubelet[2603]: E0129 12:50:01.022001 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zwt5p_kube-system(b7e92067-48ec-4b6c-a725-b3129763f04a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zwt5p_kube-system(b7e92067-48ec-4b6c-a725-b3129763f04a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwt5p" podUID="b7e92067-48ec-4b6c-a725-b3129763f04a" Jan 29 12:50:01.023067 containerd[1452]: time="2025-01-29T12:50:01.022755382Z" level=error msg="Failed to destroy network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.023612 containerd[1452]: time="2025-01-29T12:50:01.023380905Z" level=error msg="encountered an error cleaning up failed sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.023682 containerd[1452]: time="2025-01-29T12:50:01.023506190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d77dbf59-t4fpz,Uid:99f92faa-8e44-47bb-8c33-cf1d3c148912,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.023970 containerd[1452]: time="2025-01-29T12:50:01.023621657Z" level=error msg="encountered an error cleaning up failed sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.024165 containerd[1452]: time="2025-01-29T12:50:01.024140791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n5k7q,Uid:0f12975c-89a3-46d1-87fb-a8eed8bcd180,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.024723 kubelet[2603]: E0129 12:50:01.024590 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.024723 kubelet[2603]: E0129 12:50:01.024630 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d77dbf59-t4fpz" Jan 29 12:50:01.024723 kubelet[2603]: E0129 12:50:01.024649 2603 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d77dbf59-t4fpz" Jan 29 12:50:01.024833 kubelet[2603]: E0129 12:50:01.024684 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55d77dbf59-t4fpz_calico-apiserver(99f92faa-8e44-47bb-8c33-cf1d3c148912)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55d77dbf59-t4fpz_calico-apiserver(99f92faa-8e44-47bb-8c33-cf1d3c148912)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d77dbf59-t4fpz" podUID="99f92faa-8e44-47bb-8c33-cf1d3c148912" Jan 29 12:50:01.025677 kubelet[2603]: E0129 12:50:01.024464 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.025677 kubelet[2603]: E0129 12:50:01.025101 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n5k7q" Jan 29 12:50:01.025677 kubelet[2603]: E0129 12:50:01.025120 2603 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n5k7q" Jan 29 12:50:01.025799 kubelet[2603]: E0129 12:50:01.025175 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n5k7q_kube-system(0f12975c-89a3-46d1-87fb-a8eed8bcd180)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n5k7q_kube-system(0f12975c-89a3-46d1-87fb-a8eed8bcd180)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n5k7q" podUID="0f12975c-89a3-46d1-87fb-a8eed8bcd180"
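[Annotation: every sandbox add and delete in this stretch fails on the same stat. Calico's CNI plugin reads the node name from /var/lib/calico/nodename, a file the calico/node container (whose image pull begins at 12:50:00.902 above) writes at startup; until that container runs, every CNI operation fails with the hint seen in the log. A simplified Go sketch of that guard, illustrative rather than Calico's actual implementation:]

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodename mirrors the failing guard in simplified form: resolve the
// node name from the file calico/node writes at startup, and decorate
// a missing file with the hint seen throughout the log.
func nodename(path string) (string, error) {
	if _, err := os.Stat(path); err != nil {
		// Prints e.g. "stat /var/lib/calico/nodename: no such file or
		// directory: check that the calico/node container is running
		// and has mounted /var/lib/calico/".
		return "", fmt.Errorf("%v: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename("/var/lib/calico/nodename")
	if err != nil {
		fmt.Println("setup network failed:", err)
		return
	}
	fmt.Println("node:", name)
}
```

[These RunPodSandbox and StopPodSandbox failures should clear once calico/node starts and writes the file; until then the kubelet keeps retrying each pod, which is why the same chain repeats below for every pending sandbox.]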
time="2025-01-29T12:50:01.027215380Z" level=error msg="Failed to destroy network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.028715 containerd[1452]: time="2025-01-29T12:50:01.028661143Z" level=error msg="encountered an error cleaning up failed sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.028898 containerd[1452]: time="2025-01-29T12:50:01.028823067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fcdb488d-h7k99,Uid:75b0dcc9-8d93-4002-b03a-5f5411f1a957,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.029642 kubelet[2603]: E0129 12:50:01.029617 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.029788 kubelet[2603]: E0129 12:50:01.029731 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76fcdb488d-h7k99" Jan 29 12:50:01.029788 kubelet[2603]: E0129 12:50:01.029758 2603 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76fcdb488d-h7k99" Jan 29 12:50:01.029973 kubelet[2603]: E0129 12:50:01.029910 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76fcdb488d-h7k99_calico-system(75b0dcc9-8d93-4002-b03a-5f5411f1a957)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76fcdb488d-h7k99_calico-system(75b0dcc9-8d93-4002-b03a-5f5411f1a957)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76fcdb488d-h7k99" podUID="75b0dcc9-8d93-4002-b03a-5f5411f1a957" Jan 29 12:50:01.041013 containerd[1452]: time="2025-01-29T12:50:01.040886021Z" level=error msg="Failed to destroy network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.041244 containerd[1452]: time="2025-01-29T12:50:01.041200090Z" level=error msg="encountered an error cleaning up failed sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.041317 containerd[1452]: time="2025-01-29T12:50:01.041264180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d77dbf59-pmmxr,Uid:47cdff09-3717-4637-aaa6-498f177eaff7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.041488 kubelet[2603]: E0129 12:50:01.041459 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:01.041591 kubelet[2603]: E0129 12:50:01.041511 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d77dbf59-pmmxr" Jan 29 12:50:01.041591 kubelet[2603]: E0129 12:50:01.041537 2603 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d77dbf59-pmmxr" Jan 29 12:50:01.041591 kubelet[2603]: E0129 12:50:01.041579 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55d77dbf59-pmmxr_calico-apiserver(47cdff09-3717-4637-aaa6-498f177eaff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55d77dbf59-pmmxr_calico-apiserver(47cdff09-3717-4637-aaa6-498f177eaff7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d77dbf59-pmmxr" podUID="47cdff09-3717-4637-aaa6-498f177eaff7" Jan 29 12:50:01.733552 systemd[1]: Created slice kubepods-besteffort-podb3395d29_aa34_40db_87bd_39bbc4377d98.slice - libcontainer container kubepods-besteffort-podb3395d29_aa34_40db_87bd_39bbc4377d98.slice. Jan 29 12:50:01.739068 containerd[1452]: time="2025-01-29T12:50:01.738985095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fldpg,Uid:b3395d29-aa34-40db-87bd-39bbc4377d98,Namespace:calico-system,Attempt:0,}" Jan 29 12:50:01.794535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac-shm.mount: Deactivated successfully. Jan 29 12:50:01.794743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8-shm.mount: Deactivated successfully. Jan 29 12:50:01.903762 kubelet[2603]: I0129 12:50:01.903536 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:01.910074 kubelet[2603]: I0129 12:50:01.907389 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:01.910291 containerd[1452]: time="2025-01-29T12:50:01.908440292Z" level=info msg="StopPodSandbox for \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\"" Jan 29 12:50:01.910291 containerd[1452]: time="2025-01-29T12:50:01.908791482Z" level=info msg="Ensure that sandbox e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb in task-service has been cleanup successfully" Jan 29 12:50:01.914578 containerd[1452]: time="2025-01-29T12:50:01.910981070Z" level=info msg="StopPodSandbox for \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\"" Jan 29 12:50:01.934967 containerd[1452]: time="2025-01-29T12:50:01.915224272Z" level=info msg="Ensure that sandbox 22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8 in task-service has been cleanup successfully" Jan 29 12:50:01.934967 containerd[1452]: time="2025-01-29T12:50:01.923186482Z" level=info msg="StopPodSandbox for \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\"" Jan 29 12:50:01.934967 containerd[1452]: time="2025-01-29T12:50:01.923622159Z" level=info msg="Ensure that sandbox a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4 in task-service has been cleanup successfully" Jan 29 12:50:01.934967 containerd[1452]: time="2025-01-29T12:50:01.928306909Z" level=info msg="StopPodSandbox for \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\"" Jan 29 12:50:01.934967 containerd[1452]: time="2025-01-29T12:50:01.930804716Z" level=info msg="Ensure that sandbox 0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7 in task-service has been cleanup successfully" Jan 29 12:50:01.935680 kubelet[2603]: I0129 12:50:01.920118 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:01.935680 kubelet[2603]: I0129 12:50:01.926366 2603 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:01.944029 kubelet[2603]: I0129 12:50:01.943234 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:01.946144 containerd[1452]: time="2025-01-29T12:50:01.946065261Z" level=info msg="StopPodSandbox for \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\"" Jan 29 12:50:01.946483 containerd[1452]: time="2025-01-29T12:50:01.946392234Z" level=info msg="Ensure that sandbox 74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac in task-service has been cleanup successfully" Jan 29 12:50:02.039538 containerd[1452]: time="2025-01-29T12:50:02.038837503Z" level=error msg="StopPodSandbox for \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\" failed" error="failed to destroy network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:02.039665 kubelet[2603]: E0129 12:50:02.039227 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:02.039665 kubelet[2603]: E0129 12:50:02.039310 2603 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8"} Jan 29 12:50:02.039665 kubelet[2603]: E0129 12:50:02.039425 2603 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7e92067-48ec-4b6c-a725-b3129763f04a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:50:02.039665 kubelet[2603]: E0129 12:50:02.039459 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7e92067-48ec-4b6c-a725-b3129763f04a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwt5p" podUID="b7e92067-48ec-4b6c-a725-b3129763f04a" Jan 29 12:50:02.049712 containerd[1452]: time="2025-01-29T12:50:02.049657836Z" level=error msg="StopPodSandbox for \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\" failed" error="failed to destroy network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:02.050083 kubelet[2603]: E0129 12:50:02.049932 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:02.050083 kubelet[2603]: E0129 12:50:02.050019 2603 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac"} Jan 29 12:50:02.050083 kubelet[2603]: E0129 12:50:02.050056 2603 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f12975c-89a3-46d1-87fb-a8eed8bcd180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:50:02.050223 kubelet[2603]: E0129 12:50:02.050081 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f12975c-89a3-46d1-87fb-a8eed8bcd180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n5k7q" podUID="0f12975c-89a3-46d1-87fb-a8eed8bcd180" Jan 29 12:50:02.066529 containerd[1452]: time="2025-01-29T12:50:02.066156743Z" level=error msg="StopPodSandbox for \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\" failed" error="failed to destroy network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:02.066660 kubelet[2603]: E0129 12:50:02.066419 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:02.066660 kubelet[2603]: E0129 12:50:02.066522 2603 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4"} Jan 29 12:50:02.066660 kubelet[2603]: E0129 12:50:02.066563 2603 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47cdff09-3717-4637-aaa6-498f177eaff7\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:50:02.066660 kubelet[2603]: E0129 12:50:02.066614 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47cdff09-3717-4637-aaa6-498f177eaff7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d77dbf59-pmmxr" podUID="47cdff09-3717-4637-aaa6-498f177eaff7" Jan 29 12:50:02.067329 containerd[1452]: time="2025-01-29T12:50:02.066961624Z" level=error msg="StopPodSandbox for \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\" failed" error="failed to destroy network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:02.067378 kubelet[2603]: E0129 12:50:02.067130 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:02.067378 kubelet[2603]: E0129 12:50:02.067156 2603 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb"} Jan 29 12:50:02.067378 kubelet[2603]: E0129 12:50:02.067182 2603 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99f92faa-8e44-47bb-8c33-cf1d3c148912\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:50:02.067378 kubelet[2603]: E0129 12:50:02.067201 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99f92faa-8e44-47bb-8c33-cf1d3c148912\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d77dbf59-t4fpz" podUID="99f92faa-8e44-47bb-8c33-cf1d3c148912" Jan 29 12:50:02.068142 containerd[1452]: time="2025-01-29T12:50:02.068091975Z" level=error msg="StopPodSandbox for 
\"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\" failed" error="failed to destroy network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:02.068439 kubelet[2603]: E0129 12:50:02.068223 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:02.068439 kubelet[2603]: E0129 12:50:02.068256 2603 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7"} Jan 29 12:50:02.068439 kubelet[2603]: E0129 12:50:02.068281 2603 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75b0dcc9-8d93-4002-b03a-5f5411f1a957\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:50:02.068439 kubelet[2603]: E0129 12:50:02.068304 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75b0dcc9-8d93-4002-b03a-5f5411f1a957\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76fcdb488d-h7k99" podUID="75b0dcc9-8d93-4002-b03a-5f5411f1a957" Jan 29 12:50:02.103297 containerd[1452]: time="2025-01-29T12:50:02.103231809Z" level=error msg="Failed to destroy network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:02.105857 containerd[1452]: time="2025-01-29T12:50:02.105671677Z" level=error msg="encountered an error cleaning up failed sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:02.105857 containerd[1452]: time="2025-01-29T12:50:02.105753271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fldpg,Uid:b3395d29-aa34-40db-87bd-39bbc4377d98,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:02.106419 kubelet[2603]: E0129 12:50:02.106150 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:02.106419 kubelet[2603]: E0129 12:50:02.106234 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fldpg" Jan 29 12:50:02.106419 kubelet[2603]: E0129 12:50:02.106280 2603 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fldpg" Jan 29 12:50:02.106298 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b-shm.mount: Deactivated successfully. 
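Every failing ADD and DEL trace above stops at the same stat: the Calico CNI plugin reads the node's name from /var/lib/calico/nodename, a file the calico/node container writes once it is up, and until that file exists the plugin cannot tell which node's records to touch. A minimal Go sketch of the same check (path taken from the log; the printed hint mirrors the plugin's error message, not its actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the log lines above; calico/node writes this file
	// when it starts, and CNI ADD/DEL fail until it exists.
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Mirrors the hint in the plugin's error message.
		fmt.Fprintf(os.Stderr, "%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}
	fmt.Println("CNI would use node name:", strings.TrimSpace(string(data)))
}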
Jan 29 12:50:02.106638 kubelet[2603]: E0129 12:50:02.106368 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fldpg_calico-system(b3395d29-aa34-40db-87bd-39bbc4377d98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fldpg_calico-system(b3395d29-aa34-40db-87bd-39bbc4377d98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98" Jan 29 12:50:02.132240 kubelet[2603]: I0129 12:50:02.131911 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:50:02.949120 kubelet[2603]: I0129 12:50:02.948463 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:02.951695 containerd[1452]: time="2025-01-29T12:50:02.951591777Z" level=info msg="StopPodSandbox for \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\"" Jan 29 12:50:02.952209 containerd[1452]: time="2025-01-29T12:50:02.952160785Z" level=info msg="Ensure that sandbox 61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b in task-service has been cleanup successfully" Jan 29 12:50:03.018316 containerd[1452]: time="2025-01-29T12:50:03.018258386Z" level=error msg="StopPodSandbox for \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\" failed" error="failed to destroy network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:50:03.018679 kubelet[2603]: E0129 12:50:03.018512 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:03.018679 kubelet[2603]: E0129 12:50:03.018567 2603 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b"} Jan 29 12:50:03.018679 kubelet[2603]: E0129 12:50:03.018602 2603 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b3395d29-aa34-40db-87bd-39bbc4377d98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:50:03.018679 kubelet[2603]: E0129 12:50:03.018627 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"b3395d29-aa34-40db-87bd-39bbc4377d98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fldpg" podUID="b3395d29-aa34-40db-87bd-39bbc4377d98" Jan 29 12:50:10.037580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176603046.mount: Deactivated successfully. Jan 29 12:50:10.532115 containerd[1452]: time="2025-01-29T12:50:10.531938878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:10.534875 containerd[1452]: time="2025-01-29T12:50:10.534762847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:50:10.536855 containerd[1452]: time="2025-01-29T12:50:10.536713887Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:10.542014 containerd[1452]: time="2025-01-29T12:50:10.541830146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:10.543734 containerd[1452]: time="2025-01-29T12:50:10.543372119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.640975847s" Jan 29 12:50:10.543734 containerd[1452]: time="2025-01-29T12:50:10.543483168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 12:50:10.589713 containerd[1452]: time="2025-01-29T12:50:10.589572781Z" level=info msg="CreateContainer within sandbox \"1eb5a9742ade836b4cedeecbbde74edcc511b2fcb2e803ef163cc1273b6e5c4c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:50:10.640607 containerd[1452]: time="2025-01-29T12:50:10.640516050Z" level=info msg="CreateContainer within sandbox \"1eb5a9742ade836b4cedeecbbde74edcc511b2fcb2e803ef163cc1273b6e5c4c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"429ef708081971167ebf4995543e09271b0746d24a93997d7236e20f3d1fec04\"" Jan 29 12:50:10.642677 containerd[1452]: time="2025-01-29T12:50:10.641631092Z" level=info msg="StartContainer for \"429ef708081971167ebf4995543e09271b0746d24a93997d7236e20f3d1fec04\"" Jan 29 12:50:10.693581 systemd[1]: Started cri-containerd-429ef708081971167ebf4995543e09271b0746d24a93997d7236e20f3d1fec04.scope - libcontainer container 429ef708081971167ebf4995543e09271b0746d24a93997d7236e20f3d1fec04. Jan 29 12:50:10.737544 containerd[1452]: time="2025-01-29T12:50:10.737473561Z" level=info msg="StartContainer for \"429ef708081971167ebf4995543e09271b0746d24a93997d7236e20f3d1fec04\" returns successfully" Jan 29 12:50:10.810990 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 29 12:50:10.811106 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 29 12:50:10.999884 kubelet[2603]: I0129 12:50:10.999001 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sm9ph" podStartSLOduration=1.321719123 podStartE2EDuration="25.998984417s" podCreationTimestamp="2025-01-29 12:49:45 +0000 UTC" firstStartedPulling="2025-01-29 12:49:45.868576339 +0000 UTC m=+12.291533141" lastFinishedPulling="2025-01-29 12:50:10.545841583 +0000 UTC m=+36.968798435" observedRunningTime="2025-01-29 12:50:10.997908488 +0000 UTC m=+37.420865330" watchObservedRunningTime="2025-01-29 12:50:10.998984417 +0000 UTC m=+37.421941219" Jan 29 12:50:12.020158 systemd[1]: run-containerd-runc-k8s.io-429ef708081971167ebf4995543e09271b0746d24a93997d7236e20f3d1fec04-runc.6e4tUX.mount: Deactivated successfully. Jan 29 12:50:12.626442 kernel: bpftool[3864]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:50:12.716840 containerd[1452]: time="2025-01-29T12:50:12.716764997Z" level=info msg="StopPodSandbox for \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\"" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.826 [INFO][3879] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.826 [INFO][3879] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" iface="eth0" netns="/var/run/netns/cni-88168696-58cc-616a-82d8-fd28de878965" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.826 [INFO][3879] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" iface="eth0" netns="/var/run/netns/cni-88168696-58cc-616a-82d8-fd28de878965" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.829 [INFO][3879] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" iface="eth0" netns="/var/run/netns/cni-88168696-58cc-616a-82d8-fd28de878965" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.829 [INFO][3879] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.829 [INFO][3879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.855 [INFO][3885] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" HandleID="k8s-pod-network.a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.855 [INFO][3885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.855 [INFO][3885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
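The pod_startup_latency_tracker entry above encodes simple arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A short Go sketch, with the timestamps copied from that entry, reproduces both figures:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the kubelet entry above.
	created := parse("2025-01-29 12:49:45 +0000 UTC")
	watched := parse("2025-01-29 12:50:10.998984417 +0000 UTC")
	pullStart := parse("2025-01-29 12:49:45.868576339 +0000 UTC")
	pullEnd := parse("2025-01-29 12:50:10.545841583 +0000 UTC")

	e2e := watched.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e) // 25.998984417s, as logged
	// kubelet measures the pull window on the monotonic clock (the m=+
	// offsets), so the final digits differ slightly from 1.321719123.
	fmt.Println("podStartSLOduration:", slo)
}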
Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.863 [WARNING][3885] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" HandleID="k8s-pod-network.a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.864 [INFO][3885] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" HandleID="k8s-pod-network.a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.865 [INFO][3885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:12.870002 containerd[1452]: 2025-01-29 12:50:12.868 [INFO][3879] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:12.872567 containerd[1452]: time="2025-01-29T12:50:12.872489092Z" level=info msg="TearDown network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\" successfully" Jan 29 12:50:12.872567 containerd[1452]: time="2025-01-29T12:50:12.872539516Z" level=info msg="StopPodSandbox for \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\" returns successfully" Jan 29 12:50:12.873310 containerd[1452]: time="2025-01-29T12:50:12.873273613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d77dbf59-pmmxr,Uid:47cdff09-3717-4637-aaa6-498f177eaff7,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:50:12.873808 systemd[1]: run-netns-cni\x2d88168696\x2d58cc\x2d616a\x2d82d8\x2dfd28de878965.mount: Deactivated successfully. 
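The run-netns-cni\x2d... mount unit that systemd just deactivated is the escaped name of the network-namespace bind mount: in unit names '-' stands for '/' and a literal '-' is written \x2d. A rough Go sketch of the reverse mapping (it handles only those two rules, not the full systemd-escape algorithm):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses the two escaping rules visible in the log:
// '-' stands for '/' and `\x2d` for a literal '-'. Sketch only.
func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		case name[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnitPath(`run-netns-cni\x2d88168696\x2d58cc\x2d616a\x2d82d8\x2dfd28de878965.mount`))
	// Output: /run/netns/cni-88168696-58cc-616a-82d8-fd28de878965
}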
Jan 29 12:50:12.983249 systemd-networkd[1374]: vxlan.calico: Link UP Jan 29 12:50:12.983258 systemd-networkd[1374]: vxlan.calico: Gained carrier Jan 29 12:50:13.120542 systemd-networkd[1374]: cali89fde96589b: Link UP Jan 29 12:50:13.121130 systemd-networkd[1374]: cali89fde96589b: Gained carrier Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:12.957 [INFO][3906] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0 calico-apiserver-55d77dbf59- calico-apiserver 47cdff09-3717-4637-aaa6-498f177eaff7 742 0 2025-01-29 12:49:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d77dbf59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-6-7edc95d587.novalocal calico-apiserver-55d77dbf59-pmmxr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89fde96589b [] []}} ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-pmmxr" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:12.957 [INFO][3906] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-pmmxr" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.028 [INFO][3922] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" HandleID="k8s-pod-network.86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.057 [INFO][3922] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" HandleID="k8s-pod-network.86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-6-7edc95d587.novalocal", "pod":"calico-apiserver-55d77dbf59-pmmxr", "timestamp":"2025-01-29 12:50:13.028213726 +0000 UTC"}, Hostname:"ci-4081-3-0-6-7edc95d587.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.057 [INFO][3922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.057 [INFO][3922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
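Here systemd-networkd reports the Calico data plane coming up: the vxlan.calico overlay device plus one cali* veth per pod endpoint, each gaining carrier (and, a little later in the log, an IPv6 link-local address). A stdlib-only Go sketch that would list those interfaces when run on such a node:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Names as they appear in the systemd-networkd lines above.
		if ifc.Name == "vxlan.calico" || strings.HasPrefix(ifc.Name, "cali") {
			fmt.Printf("%-16s up=%-5v mtu=%d\n", ifc.Name, ifc.Flags&net.FlagUp != 0, ifc.MTU)
		}
	}
}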
Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.057 [INFO][3922] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-7edc95d587.novalocal' Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.060 [INFO][3922] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.065 [INFO][3922] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.071 [INFO][3922] ipam/ipam.go 489: Trying affinity for 192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.074 [INFO][3922] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.076 [INFO][3922] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.076 [INFO][3922] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.64/26 handle="k8s-pod-network.86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.078 [INFO][3922] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0 Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.086 [INFO][3922] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.64/26 handle="k8s-pod-network.86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.108 [INFO][3922] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.65/26] block=192.168.32.64/26 handle="k8s-pod-network.86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.108 [INFO][3922] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.65/26] handle="k8s-pod-network.86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.108 [INFO][3922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
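The IPAM trace above walks the node's affine block 192.168.32.64/26 and claims 192.168.32.65 for the apiserver pod. A /26 covers 2^(32-26) = 64 addresses, 192.168.32.64 through 192.168.32.127; this small Go check confirms the claimed address sits inside the block:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address taken from the IPAM trace above.
	block := netip.MustParsePrefix("192.168.32.64/26")
	claimed := netip.MustParseAddr("192.168.32.65")

	fmt.Println("addresses in /26:", 1<<(32-block.Bits()))             // 64
	fmt.Println("block contains claimed IP:", block.Contains(claimed)) // true
}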
Jan 29 12:50:13.299547 containerd[1452]: 2025-01-29 12:50:13.108 [INFO][3922] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.65/26] IPv6=[] ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" HandleID="k8s-pod-network.86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:13.303234 containerd[1452]: 2025-01-29 12:50:13.111 [INFO][3906] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-pmmxr" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0", GenerateName:"calico-apiserver-55d77dbf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"47cdff09-3717-4637-aaa6-498f177eaff7", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d77dbf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"", Pod:"calico-apiserver-55d77dbf59-pmmxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89fde96589b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:13.303234 containerd[1452]: 2025-01-29 12:50:13.111 [INFO][3906] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.65/32] ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-pmmxr" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:13.303234 containerd[1452]: 2025-01-29 12:50:13.111 [INFO][3906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89fde96589b ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-pmmxr" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:13.303234 containerd[1452]: 2025-01-29 12:50:13.122 [INFO][3906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-pmmxr" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:13.303234 
containerd[1452]: 2025-01-29 12:50:13.123 [INFO][3906] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-pmmxr" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0", GenerateName:"calico-apiserver-55d77dbf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"47cdff09-3717-4637-aaa6-498f177eaff7", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d77dbf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0", Pod:"calico-apiserver-55d77dbf59-pmmxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89fde96589b", MAC:"e6:9b:9f:ba:b4:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:13.303234 containerd[1452]: 2025-01-29 12:50:13.295 [INFO][3906] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-pmmxr" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:13.342996 containerd[1452]: time="2025-01-29T12:50:13.342912948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:50:13.344464 containerd[1452]: time="2025-01-29T12:50:13.343563700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:50:13.344464 containerd[1452]: time="2025-01-29T12:50:13.343585631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:13.344464 containerd[1452]: time="2025-01-29T12:50:13.343674708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:13.389631 systemd[1]: Started cri-containerd-86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0.scope - libcontainer container 86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0. 
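With the endpoint written to the datastore, containerd launches the sandbox as a cri-containerd-<id>.scope shim. The CRI plugin keeps its containers in containerd's "k8s.io" namespace; a sketch using the containerd Go client (default socket path assumed) that lists them:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default socket path assumed.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin's containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // e.g. 86408c4f55c1... for the sandbox above
	}
}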
Jan 29 12:50:13.439148 containerd[1452]: time="2025-01-29T12:50:13.439049295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d77dbf59-pmmxr,Uid:47cdff09-3717-4637-aaa6-498f177eaff7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0\"" Jan 29 12:50:13.441150 containerd[1452]: time="2025-01-29T12:50:13.440920565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:50:13.722251 containerd[1452]: time="2025-01-29T12:50:13.722142840Z" level=info msg="StopPodSandbox for \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\"" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.837 [INFO][4047] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.838 [INFO][4047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" iface="eth0" netns="/var/run/netns/cni-39bad7c5-7d6b-87ac-a8de-e491d60fadac" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.839 [INFO][4047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" iface="eth0" netns="/var/run/netns/cni-39bad7c5-7d6b-87ac-a8de-e491d60fadac" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.839 [INFO][4047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" iface="eth0" netns="/var/run/netns/cni-39bad7c5-7d6b-87ac-a8de-e491d60fadac" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.839 [INFO][4047] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.839 [INFO][4047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.893 [INFO][4053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" HandleID="k8s-pod-network.61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.893 [INFO][4053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.894 [INFO][4053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.903 [WARNING][4053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" HandleID="k8s-pod-network.61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.903 [INFO][4053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" HandleID="k8s-pod-network.61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.907 [INFO][4053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:13.911124 containerd[1452]: 2025-01-29 12:50:13.909 [INFO][4047] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:13.913060 containerd[1452]: time="2025-01-29T12:50:13.912585900Z" level=info msg="TearDown network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\" successfully" Jan 29 12:50:13.913060 containerd[1452]: time="2025-01-29T12:50:13.912615014Z" level=info msg="StopPodSandbox for \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\" returns successfully" Jan 29 12:50:13.915872 systemd[1]: run-netns-cni\x2d39bad7c5\x2d7d6b\x2d87ac\x2da8de\x2de491d60fadac.mount: Deactivated successfully. Jan 29 12:50:13.916984 containerd[1452]: time="2025-01-29T12:50:13.916919450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fldpg,Uid:b3395d29-aa34-40db-87bd-39bbc4377d98,Namespace:calico-system,Attempt:1,}" Jan 29 12:50:14.085214 systemd-networkd[1374]: calie988f29dbcf: Link UP Jan 29 12:50:14.085909 systemd-networkd[1374]: calie988f29dbcf: Gained carrier Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.001 [INFO][4060] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0 csi-node-driver- calico-system b3395d29-aa34-40db-87bd-39bbc4377d98 749 0 2025-01-29 12:49:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-6-7edc95d587.novalocal csi-node-driver-fldpg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie988f29dbcf [] []}} ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Namespace="calico-system" Pod="csi-node-driver-fldpg" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.001 [INFO][4060] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Namespace="calico-system" Pod="csi-node-driver-fldpg" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.031 [INFO][4070] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" HandleID="k8s-pod-network.6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.042 [INFO][4070] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" HandleID="k8s-pod-network.6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-6-7edc95d587.novalocal", "pod":"csi-node-driver-fldpg", "timestamp":"2025-01-29 12:50:14.031557257 +0000 UTC"}, Hostname:"ci-4081-3-0-6-7edc95d587.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.042 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.042 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.042 [INFO][4070] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-7edc95d587.novalocal' Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.045 [INFO][4070] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.049 [INFO][4070] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.058 [INFO][4070] ipam/ipam.go 489: Trying affinity for 192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.060 [INFO][4070] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.063 [INFO][4070] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.063 [INFO][4070] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.64/26 handle="k8s-pod-network.6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.065 [INFO][4070] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457 Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.070 [INFO][4070] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.64/26 handle="k8s-pod-network.6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.080 [INFO][4070] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.66/26] 
block=192.168.32.64/26 handle="k8s-pod-network.6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.080 [INFO][4070] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.66/26] handle="k8s-pod-network.6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.080 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:14.103855 containerd[1452]: 2025-01-29 12:50:14.080 [INFO][4070] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.66/26] IPv6=[] ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" HandleID="k8s-pod-network.6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:14.104502 containerd[1452]: 2025-01-29 12:50:14.082 [INFO][4060] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Namespace="calico-system" Pod="csi-node-driver-fldpg" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3395d29-aa34-40db-87bd-39bbc4377d98", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"", Pod:"csi-node-driver-fldpg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie988f29dbcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:14.104502 containerd[1452]: 2025-01-29 12:50:14.082 [INFO][4060] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.66/32] ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Namespace="calico-system" Pod="csi-node-driver-fldpg" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:14.104502 containerd[1452]: 2025-01-29 12:50:14.082 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie988f29dbcf ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Namespace="calico-system" Pod="csi-node-driver-fldpg" 
WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:14.104502 containerd[1452]: 2025-01-29 12:50:14.084 [INFO][4060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Namespace="calico-system" Pod="csi-node-driver-fldpg" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:14.104502 containerd[1452]: 2025-01-29 12:50:14.084 [INFO][4060] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Namespace="calico-system" Pod="csi-node-driver-fldpg" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3395d29-aa34-40db-87bd-39bbc4377d98", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457", Pod:"csi-node-driver-fldpg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie988f29dbcf", MAC:"92:8b:73:12:b4:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:14.104502 containerd[1452]: 2025-01-29 12:50:14.099 [INFO][4060] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457" Namespace="calico-system" Pod="csi-node-driver-fldpg" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:14.127179 containerd[1452]: time="2025-01-29T12:50:14.126869345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:50:14.127179 containerd[1452]: time="2025-01-29T12:50:14.126931121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:50:14.127179 containerd[1452]: time="2025-01-29T12:50:14.126950798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:14.127179 containerd[1452]: time="2025-01-29T12:50:14.127092614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:14.156587 systemd[1]: Started cri-containerd-6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457.scope - libcontainer container 6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457. Jan 29 12:50:14.180360 containerd[1452]: time="2025-01-29T12:50:14.180273849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fldpg,Uid:b3395d29-aa34-40db-87bd-39bbc4377d98,Namespace:calico-system,Attempt:1,} returns sandbox id \"6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457\"" Jan 29 12:50:14.647724 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Jan 29 12:50:15.031697 systemd-networkd[1374]: cali89fde96589b: Gained IPv6LL Jan 29 12:50:15.223599 systemd-networkd[1374]: calie988f29dbcf: Gained IPv6LL Jan 29 12:50:15.718996 containerd[1452]: time="2025-01-29T12:50:15.718930136Z" level=info msg="StopPodSandbox for \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\"" Jan 29 12:50:15.719729 containerd[1452]: time="2025-01-29T12:50:15.719529791Z" level=info msg="StopPodSandbox for \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\"" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.098 [INFO][4159] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.100 [INFO][4159] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" iface="eth0" netns="/var/run/netns/cni-d143942e-e438-656e-18dc-be41cfa87b98" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.102 [INFO][4159] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" iface="eth0" netns="/var/run/netns/cni-d143942e-e438-656e-18dc-be41cfa87b98" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.105 [INFO][4159] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" iface="eth0" netns="/var/run/netns/cni-d143942e-e438-656e-18dc-be41cfa87b98" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.105 [INFO][4159] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.105 [INFO][4159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.154 [INFO][4171] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" HandleID="k8s-pod-network.22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.154 [INFO][4171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
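These StopPodSandbox calls retry teardowns that failed at 12:50:02 (74978ae6..., the coredns-n5k7q sandbox, reappears from the earlier KillPodSandboxError); with /var/lib/calico/nodename now present, the DEL traces run to completion. When correlating such retries by hand, a small scanner helps. This Go sketch is purely a log-reading aid, not part of any component here: it counts how many journal lines mention each 64-hex-digit sandbox/container ID:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// 64 hex digits is the shape of the containerd sandbox/container IDs
	// throughout this journal.
	re := regexp.MustCompile(`[0-9a-f]{64}`)
	counts := make(map[string]int)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		for _, id := range re.FindAllString(sc.Text(), -1) {
			counts[id]++
		}
	}
	for id, n := range counts {
		fmt.Printf("%s... %d line(s)\n", id[:12], n)
	}
}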
Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.155 [INFO][4171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.163 [WARNING][4171] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" HandleID="k8s-pod-network.22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.163 [INFO][4171] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" HandleID="k8s-pod-network.22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.166 [INFO][4171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:16.173335 containerd[1452]: 2025-01-29 12:50:16.171 [INFO][4159] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:16.174191 containerd[1452]: time="2025-01-29T12:50:16.174038131Z" level=info msg="TearDown network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\" successfully" Jan 29 12:50:16.174828 containerd[1452]: time="2025-01-29T12:50:16.174374473Z" level=info msg="StopPodSandbox for \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\" returns successfully" Jan 29 12:50:16.178234 containerd[1452]: time="2025-01-29T12:50:16.178087218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwt5p,Uid:b7e92067-48ec-4b6c-a725-b3129763f04a,Namespace:kube-system,Attempt:1,}" Jan 29 12:50:16.183203 systemd[1]: run-netns-cni\x2dd143942e\x2de438\x2d656e\x2d18dc\x2dbe41cfa87b98.mount: Deactivated successfully. Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.119 [INFO][4160] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.119 [INFO][4160] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" iface="eth0" netns="/var/run/netns/cni-e7c80375-e7b2-2771-bd3f-716206f1298d" Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.119 [INFO][4160] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" iface="eth0" netns="/var/run/netns/cni-e7c80375-e7b2-2771-bd3f-716206f1298d" Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.120 [INFO][4160] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" iface="eth0" netns="/var/run/netns/cni-e7c80375-e7b2-2771-bd3f-716206f1298d" Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.120 [INFO][4160] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.120 [INFO][4160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.187 [INFO][4174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" HandleID="k8s-pod-network.74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.188 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.188 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.200 [WARNING][4174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" HandleID="k8s-pod-network.74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.200 [INFO][4174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" HandleID="k8s-pod-network.74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.203 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:16.209067 containerd[1452]: 2025-01-29 12:50:16.205 [INFO][4160] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:16.210035 containerd[1452]: time="2025-01-29T12:50:16.209976976Z" level=info msg="TearDown network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\" successfully" Jan 29 12:50:16.210122 containerd[1452]: time="2025-01-29T12:50:16.210103384Z" level=info msg="StopPodSandbox for \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\" returns successfully" Jan 29 12:50:16.212421 containerd[1452]: time="2025-01-29T12:50:16.211123548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n5k7q,Uid:0f12975c-89a3-46d1-87fb-a8eed8bcd180,Namespace:kube-system,Attempt:1,}" Jan 29 12:50:16.214052 systemd[1]: run-netns-cni\x2de7c80375\x2de7b2\x2d2771\x2dbd3f\x2d716206f1298d.mount: Deactivated successfully. 
Jan 29 12:50:16.466590 systemd-networkd[1374]: cali287789e54f2: Link UP Jan 29 12:50:16.467902 systemd-networkd[1374]: cali287789e54f2: Gained carrier Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.328 [INFO][4184] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0 coredns-668d6bf9bc- kube-system b7e92067-48ec-4b6c-a725-b3129763f04a 760 0 2025-01-29 12:49:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-6-7edc95d587.novalocal coredns-668d6bf9bc-zwt5p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali287789e54f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwt5p" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.328 [INFO][4184] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwt5p" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.376 [INFO][4205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" HandleID="k8s-pod-network.b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.395 [INFO][4205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" HandleID="k8s-pod-network.b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050d8c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-6-7edc95d587.novalocal", "pod":"coredns-668d6bf9bc-zwt5p", "timestamp":"2025-01-29 12:50:16.376304926 +0000 UTC"}, Hostname:"ci-4081-3-0-6-7edc95d587.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.395 [INFO][4205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.396 [INFO][4205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.396 [INFO][4205] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-7edc95d587.novalocal' Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.402 [INFO][4205] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.409 [INFO][4205] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.417 [INFO][4205] ipam/ipam.go 489: Trying affinity for 192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.422 [INFO][4205] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.430 [INFO][4205] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.430 [INFO][4205] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.64/26 handle="k8s-pod-network.b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.435 [INFO][4205] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.453 [INFO][4205] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.64/26 handle="k8s-pod-network.b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.460 [INFO][4205] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.67/26] block=192.168.32.64/26 handle="k8s-pod-network.b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.460 [INFO][4205] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.67/26] handle="k8s-pod-network.b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.460 [INFO][4205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
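The ipam.go sequence above is the assignment path this log repeats for every pod: confirm the host's affinity for block 192.168.32.64/26, load the block, claim one free address from it, and write the block back, all while holding the host-wide lock. A toy Go sketch of the "assign 1 addresses from block" step, assuming an in-memory set in place of Calico's datastore types, and assuming .64 through .66 were claimed earlier (off-screen) so the next free address matches the .67 claimed for coredns-668d6bf9bc-zwt5p:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block is a stand-in for a Calico IPAM affinity block: a CIDR plus the
    // set of addresses already handed out.
    type block struct {
        cidr      netip.Prefix
        allocated map[netip.Addr]bool
    }

    // assign returns the first unallocated address, mimicking the
    // "Attempting to assign 1 addresses from block" step above.
    func (b *block) assign() (netip.Addr, bool) {
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if !b.allocated[a] {
                b.allocated[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false // exhausted; real IPAM would try another block
    }

    func main() {
        b := &block{
            cidr: netip.MustParsePrefix("192.168.32.64/26"),
            allocated: map[netip.Addr]bool{
                // Assumed taken before this section begins.
                netip.MustParseAddr("192.168.32.64"): true,
                netip.MustParseAddr("192.168.32.65"): true,
                netip.MustParseAddr("192.168.32.66"): true,
            },
        }
        if ip, ok := b.assign(); ok {
            fmt.Println(ip) // 192.168.32.67
        }
    }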
Jan 29 12:50:16.499590 containerd[1452]: 2025-01-29 12:50:16.460 [INFO][4205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.67/26] IPv6=[] ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" HandleID="k8s-pod-network.b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.500388 containerd[1452]: 2025-01-29 12:50:16.463 [INFO][4184] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwt5p" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b7e92067-48ec-4b6c-a725-b3129763f04a", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-zwt5p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287789e54f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:16.500388 containerd[1452]: 2025-01-29 12:50:16.463 [INFO][4184] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.67/32] ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwt5p" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.500388 containerd[1452]: 2025-01-29 12:50:16.463 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali287789e54f2 ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwt5p" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.500388 containerd[1452]: 2025-01-29 12:50:16.468 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-zwt5p" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.500388 containerd[1452]: 2025-01-29 12:50:16.470 [INFO][4184] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwt5p" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b7e92067-48ec-4b6c-a725-b3129763f04a", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d", Pod:"coredns-668d6bf9bc-zwt5p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287789e54f2", MAC:"8e:d9:96:a3:91:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:16.500388 containerd[1452]: 2025-01-29 12:50:16.496 [INFO][4184] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwt5p" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:16.538546 containerd[1452]: time="2025-01-29T12:50:16.538468612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:50:16.538750 containerd[1452]: time="2025-01-29T12:50:16.538530278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:50:16.538750 containerd[1452]: time="2025-01-29T12:50:16.538545426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:16.538750 containerd[1452]: time="2025-01-29T12:50:16.538638791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:16.560647 systemd[1]: Started cri-containerd-b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d.scope - libcontainer container b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d. Jan 29 12:50:16.619591 containerd[1452]: time="2025-01-29T12:50:16.619473968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwt5p,Uid:b7e92067-48ec-4b6c-a725-b3129763f04a,Namespace:kube-system,Attempt:1,} returns sandbox id \"b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d\"" Jan 29 12:50:16.624558 containerd[1452]: time="2025-01-29T12:50:16.624501530Z" level=info msg="CreateContainer within sandbox \"b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:50:16.648005 systemd-networkd[1374]: cali8277b40c3f1: Link UP Jan 29 12:50:16.649343 systemd-networkd[1374]: cali8277b40c3f1: Gained carrier Jan 29 12:50:16.656681 containerd[1452]: time="2025-01-29T12:50:16.656639523Z" level=info msg="CreateContainer within sandbox \"b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6399dc54fc1e38fd90753f2f0b01dded10091f6353cfd4186fedc879cb6f599e\"" Jan 29 12:50:16.661616 containerd[1452]: time="2025-01-29T12:50:16.661584260Z" level=info msg="StartContainer for \"6399dc54fc1e38fd90753f2f0b01dded10091f6353cfd4186fedc879cb6f599e\"" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.364 [INFO][4193] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0 coredns-668d6bf9bc- kube-system 0f12975c-89a3-46d1-87fb-a8eed8bcd180 761 0 2025-01-29 12:49:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-6-7edc95d587.novalocal coredns-668d6bf9bc-n5k7q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8277b40c3f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Namespace="kube-system" Pod="coredns-668d6bf9bc-n5k7q" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.365 [INFO][4193] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Namespace="kube-system" Pod="coredns-668d6bf9bc-n5k7q" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.424 [INFO][4211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" HandleID="k8s-pod-network.8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.493 [INFO][4211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" 
HandleID="k8s-pod-network.8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031bc10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-6-7edc95d587.novalocal", "pod":"coredns-668d6bf9bc-n5k7q", "timestamp":"2025-01-29 12:50:16.424687026 +0000 UTC"}, Hostname:"ci-4081-3-0-6-7edc95d587.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.493 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.494 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.494 [INFO][4211] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-7edc95d587.novalocal' Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.503 [INFO][4211] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.593 [INFO][4211] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.603 [INFO][4211] ipam/ipam.go 489: Trying affinity for 192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.608 [INFO][4211] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.614 [INFO][4211] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.614 [INFO][4211] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.64/26 handle="k8s-pod-network.8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.616 [INFO][4211] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17 Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.626 [INFO][4211] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.64/26 handle="k8s-pod-network.8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.640 [INFO][4211] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.68/26] block=192.168.32.64/26 handle="k8s-pod-network.8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.641 [INFO][4211] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.68/26] handle="k8s-pod-network.8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:16.674450 
containerd[1452]: 2025-01-29 12:50:16.641 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:16.674450 containerd[1452]: 2025-01-29 12:50:16.641 [INFO][4211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.68/26] IPv6=[] ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" HandleID="k8s-pod-network.8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.676627 containerd[1452]: 2025-01-29 12:50:16.643 [INFO][4193] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Namespace="kube-system" Pod="coredns-668d6bf9bc-n5k7q" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f12975c-89a3-46d1-87fb-a8eed8bcd180", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-n5k7q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8277b40c3f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:16.676627 containerd[1452]: 2025-01-29 12:50:16.644 [INFO][4193] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.68/32] ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Namespace="kube-system" Pod="coredns-668d6bf9bc-n5k7q" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.676627 containerd[1452]: 2025-01-29 12:50:16.644 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8277b40c3f1 ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Namespace="kube-system" Pod="coredns-668d6bf9bc-n5k7q" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.676627 containerd[1452]: 2025-01-29 12:50:16.650 [INFO][4193] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Namespace="kube-system" Pod="coredns-668d6bf9bc-n5k7q" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.676627 containerd[1452]: 2025-01-29 12:50:16.650 [INFO][4193] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Namespace="kube-system" Pod="coredns-668d6bf9bc-n5k7q" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f12975c-89a3-46d1-87fb-a8eed8bcd180", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17", Pod:"coredns-668d6bf9bc-n5k7q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8277b40c3f1", MAC:"4e:a3:37:cb:30:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:16.676627 containerd[1452]: 2025-01-29 12:50:16.669 [INFO][4193] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17" Namespace="kube-system" Pod="coredns-668d6bf9bc-n5k7q" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:16.706597 systemd[1]: Started cri-containerd-6399dc54fc1e38fd90753f2f0b01dded10091f6353cfd4186fedc879cb6f599e.scope - libcontainer container 6399dc54fc1e38fd90753f2f0b01dded10091f6353cfd4186fedc879cb6f599e. Jan 29 12:50:16.718013 containerd[1452]: time="2025-01-29T12:50:16.717170476Z" level=info msg="StopPodSandbox for \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\"" Jan 29 12:50:16.744931 containerd[1452]: time="2025-01-29T12:50:16.744794181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:50:16.744931 containerd[1452]: time="2025-01-29T12:50:16.744877598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:50:16.745555 containerd[1452]: time="2025-01-29T12:50:16.744900962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:16.745555 containerd[1452]: time="2025-01-29T12:50:16.744989107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:16.782546 systemd[1]: Started cri-containerd-8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17.scope - libcontainer container 8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17. Jan 29 12:50:16.783235 containerd[1452]: time="2025-01-29T12:50:16.783200746Z" level=info msg="StartContainer for \"6399dc54fc1e38fd90753f2f0b01dded10091f6353cfd4186fedc879cb6f599e\" returns successfully" Jan 29 12:50:16.860870 containerd[1452]: time="2025-01-29T12:50:16.860762801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n5k7q,Uid:0f12975c-89a3-46d1-87fb-a8eed8bcd180,Namespace:kube-system,Attempt:1,} returns sandbox id \"8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17\"" Jan 29 12:50:16.866205 containerd[1452]: time="2025-01-29T12:50:16.865952588Z" level=info msg="CreateContainer within sandbox \"8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:50:16.898043 containerd[1452]: time="2025-01-29T12:50:16.897838349Z" level=info msg="CreateContainer within sandbox \"8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93720dad5707ada5db7a44fb627fe9ae82357e1a0db82a8dbdb1523280a55d5d\"" Jan 29 12:50:16.899731 containerd[1452]: time="2025-01-29T12:50:16.898783602Z" level=info msg="StartContainer for \"93720dad5707ada5db7a44fb627fe9ae82357e1a0db82a8dbdb1523280a55d5d\"" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.867 [INFO][4338] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.867 [INFO][4338] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" iface="eth0" netns="/var/run/netns/cni-e95b78b7-564a-097b-d48c-99224e45e713" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.870 [INFO][4338] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" iface="eth0" netns="/var/run/netns/cni-e95b78b7-564a-097b-d48c-99224e45e713" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.871 [INFO][4338] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" iface="eth0" netns="/var/run/netns/cni-e95b78b7-564a-097b-d48c-99224e45e713" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.871 [INFO][4338] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.871 [INFO][4338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.902 [INFO][4382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" HandleID="k8s-pod-network.0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.903 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.903 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.917 [WARNING][4382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" HandleID="k8s-pod-network.0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.918 [INFO][4382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" HandleID="k8s-pod-network.0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.921 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:16.924998 containerd[1452]: 2025-01-29 12:50:16.922 [INFO][4338] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:16.925608 containerd[1452]: time="2025-01-29T12:50:16.925127035Z" level=info msg="TearDown network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\" successfully" Jan 29 12:50:16.925608 containerd[1452]: time="2025-01-29T12:50:16.925153154Z" level=info msg="StopPodSandbox for \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\" returns successfully" Jan 29 12:50:16.925966 containerd[1452]: time="2025-01-29T12:50:16.925896799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fcdb488d-h7k99,Uid:75b0dcc9-8d93-4002-b03a-5f5411f1a957,Namespace:calico-system,Attempt:1,}" Jan 29 12:50:16.949600 systemd[1]: Started cri-containerd-93720dad5707ada5db7a44fb627fe9ae82357e1a0db82a8dbdb1523280a55d5d.scope - libcontainer container 93720dad5707ada5db7a44fb627fe9ae82357e1a0db82a8dbdb1523280a55d5d. 
Jan 29 12:50:17.010388 containerd[1452]: time="2025-01-29T12:50:17.010244084Z" level=info msg="StartContainer for \"93720dad5707ada5db7a44fb627fe9ae82357e1a0db82a8dbdb1523280a55d5d\" returns successfully" Jan 29 12:50:17.077429 kubelet[2603]: I0129 12:50:17.075549 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zwt5p" podStartSLOduration=39.07553159 podStartE2EDuration="39.07553159s" podCreationTimestamp="2025-01-29 12:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:50:17.045349706 +0000 UTC m=+43.468306518" watchObservedRunningTime="2025-01-29 12:50:17.07553159 +0000 UTC m=+43.498488402" Jan 29 12:50:17.134901 kubelet[2603]: I0129 12:50:17.133808 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n5k7q" podStartSLOduration=39.133788374 podStartE2EDuration="39.133788374s" podCreationTimestamp="2025-01-29 12:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:50:17.075982856 +0000 UTC m=+43.498939688" watchObservedRunningTime="2025-01-29 12:50:17.133788374 +0000 UTC m=+43.556745187" Jan 29 12:50:17.190926 systemd[1]: run-netns-cni\x2de95b78b7\x2d564a\x2d097b\x2dd48c\x2d99224e45e713.mount: Deactivated successfully. Jan 29 12:50:17.276692 systemd-networkd[1374]: cali5e8df10cea6: Link UP Jan 29 12:50:17.278242 systemd-networkd[1374]: cali5e8df10cea6: Gained carrier Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.045 [INFO][4406] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0 calico-kube-controllers-76fcdb488d- calico-system 75b0dcc9-8d93-4002-b03a-5f5411f1a957 774 0 2025-01-29 12:49:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76fcdb488d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-6-7edc95d587.novalocal calico-kube-controllers-76fcdb488d-h7k99 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5e8df10cea6 [] []}} ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Namespace="calico-system" Pod="calico-kube-controllers-76fcdb488d-h7k99" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.045 [INFO][4406] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Namespace="calico-system" Pod="calico-kube-controllers-76fcdb488d-h7k99" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.119 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" HandleID="k8s-pod-network.68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" 
Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.139 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" HandleID="k8s-pod-network.68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030b400), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-6-7edc95d587.novalocal", "pod":"calico-kube-controllers-76fcdb488d-h7k99", "timestamp":"2025-01-29 12:50:17.11965522 +0000 UTC"}, Hostname:"ci-4081-3-0-6-7edc95d587.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.139 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.139 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.140 [INFO][4434] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-7edc95d587.novalocal' Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.149 [INFO][4434] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.233 [INFO][4434] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.240 [INFO][4434] ipam/ipam.go 489: Trying affinity for 192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.244 [INFO][4434] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.248 [INFO][4434] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.248 [INFO][4434] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.64/26 handle="k8s-pod-network.68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.251 [INFO][4434] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9 Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.257 [INFO][4434] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.64/26 handle="k8s-pod-network.68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.267 [INFO][4434] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.69/26] block=192.168.32.64/26 handle="k8s-pod-network.68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" 
host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.267 [INFO][4434] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.69/26] handle="k8s-pod-network.68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.267 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:17.305364 containerd[1452]: 2025-01-29 12:50:17.267 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.69/26] IPv6=[] ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" HandleID="k8s-pod-network.68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:17.307168 containerd[1452]: 2025-01-29 12:50:17.269 [INFO][4406] cni-plugin/k8s.go 386: Populated endpoint ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Namespace="calico-system" Pod="calico-kube-controllers-76fcdb488d-h7k99" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0", GenerateName:"calico-kube-controllers-76fcdb488d-", Namespace:"calico-system", SelfLink:"", UID:"75b0dcc9-8d93-4002-b03a-5f5411f1a957", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fcdb488d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"", Pod:"calico-kube-controllers-76fcdb488d-h7k99", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5e8df10cea6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:17.307168 containerd[1452]: 2025-01-29 12:50:17.270 [INFO][4406] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.69/32] ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Namespace="calico-system" Pod="calico-kube-controllers-76fcdb488d-h7k99" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:17.307168 containerd[1452]: 2025-01-29 12:50:17.270 [INFO][4406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e8df10cea6 ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Namespace="calico-system" Pod="calico-kube-controllers-76fcdb488d-h7k99" 
WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:17.307168 containerd[1452]: 2025-01-29 12:50:17.278 [INFO][4406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Namespace="calico-system" Pod="calico-kube-controllers-76fcdb488d-h7k99" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:17.307168 containerd[1452]: 2025-01-29 12:50:17.281 [INFO][4406] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Namespace="calico-system" Pod="calico-kube-controllers-76fcdb488d-h7k99" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0", GenerateName:"calico-kube-controllers-76fcdb488d-", Namespace:"calico-system", SelfLink:"", UID:"75b0dcc9-8d93-4002-b03a-5f5411f1a957", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fcdb488d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9", Pod:"calico-kube-controllers-76fcdb488d-h7k99", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5e8df10cea6", MAC:"22:8e:df:f0:fe:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:17.307168 containerd[1452]: 2025-01-29 12:50:17.302 [INFO][4406] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9" Namespace="calico-system" Pod="calico-kube-controllers-76fcdb488d-h7k99" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:17.352694 containerd[1452]: time="2025-01-29T12:50:17.343265744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:50:17.352694 containerd[1452]: time="2025-01-29T12:50:17.343590314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:50:17.352694 containerd[1452]: time="2025-01-29T12:50:17.343659333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:17.352694 containerd[1452]: time="2025-01-29T12:50:17.345478656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:17.379570 systemd[1]: Started cri-containerd-68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9.scope - libcontainer container 68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9. Jan 29 12:50:17.433724 containerd[1452]: time="2025-01-29T12:50:17.433689708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fcdb488d-h7k99,Uid:75b0dcc9-8d93-4002-b03a-5f5411f1a957,Namespace:calico-system,Attempt:1,} returns sandbox id \"68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9\"" Jan 29 12:50:17.657209 systemd-networkd[1374]: cali287789e54f2: Gained IPv6LL Jan 29 12:50:17.727814 containerd[1452]: time="2025-01-29T12:50:17.725792971Z" level=info msg="StopPodSandbox for \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\"" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.803 [INFO][4514] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.803 [INFO][4514] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" iface="eth0" netns="/var/run/netns/cni-01f533c7-2b8b-b5b2-eec4-90d51c159589" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.805 [INFO][4514] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" iface="eth0" netns="/var/run/netns/cni-01f533c7-2b8b-b5b2-eec4-90d51c159589" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.806 [INFO][4514] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" iface="eth0" netns="/var/run/netns/cni-01f533c7-2b8b-b5b2-eec4-90d51c159589" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.806 [INFO][4514] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.806 [INFO][4514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.840 [INFO][4520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" HandleID="k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.840 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.841 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.852 [WARNING][4520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" HandleID="k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.852 [INFO][4520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" HandleID="k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.855 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:17.861417 containerd[1452]: 2025-01-29 12:50:17.858 [INFO][4514] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:17.862245 containerd[1452]: time="2025-01-29T12:50:17.861825963Z" level=info msg="TearDown network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\" successfully" Jan 29 12:50:17.862245 containerd[1452]: time="2025-01-29T12:50:17.861876538Z" level=info msg="StopPodSandbox for \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\" returns successfully" Jan 29 12:50:17.863113 containerd[1452]: time="2025-01-29T12:50:17.862649678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d77dbf59-t4fpz,Uid:99f92faa-8e44-47bb-8c33-cf1d3c148912,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:50:17.865200 systemd[1]: run-netns-cni\x2d01f533c7\x2d2b8b\x2db5b2\x2deec4\x2d90d51c159589.mount: Deactivated successfully. 
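As in the earlier teardowns, this one logs WARNING "Asked to release address but it doesn't exist. Ignoring" and then retries by workloadID: these sandboxes are Attempt:1 retries, so the handle may already be gone, and release is treated as idempotent rather than as an error. A toy Go sketch of that tolerant-release pattern, with a plain map standing in for the real IPAM store:

    package main

    import "fmt"

    // releaseByHandle deletes a handle's addresses if present; when the handle
    // is already gone it warns and carries on, mirroring the teardown above.
    func releaseByHandle(store map[string][]string, handle string) []string {
        ips, ok := store[handle]
        if !ok {
            fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handle)
            return nil
        }
        delete(store, handle)
        return ips
    }

    func main() {
        store := map[string][]string{} // empty: nothing was claimed under this handle
        releaseByHandle(store, "k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb")
        // Teardown still "returns successfully" afterwards.
    }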
Jan 29 12:50:18.054170 systemd-networkd[1374]: cali8cf0da9d859: Link UP Jan 29 12:50:18.059680 systemd-networkd[1374]: cali8cf0da9d859: Gained carrier Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:17.951 [INFO][4527] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0 calico-apiserver-55d77dbf59- calico-apiserver 99f92faa-8e44-47bb-8c33-cf1d3c148912 796 0 2025-01-29 12:49:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d77dbf59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-6-7edc95d587.novalocal calico-apiserver-55d77dbf59-t4fpz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8cf0da9d859 [] []}} ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-t4fpz" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:17.951 [INFO][4527] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-t4fpz" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:17.985 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" HandleID="k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:17.996 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" HandleID="k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-6-7edc95d587.novalocal", "pod":"calico-apiserver-55d77dbf59-t4fpz", "timestamp":"2025-01-29 12:50:17.98572542 +0000 UTC"}, Hostname:"ci-4081-3-0-6-7edc95d587.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:17.996 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:17.996 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:17.996 [INFO][4537] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-7edc95d587.novalocal' Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:17.998 [INFO][4537] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.002 [INFO][4537] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.008 [INFO][4537] ipam/ipam.go 489: Trying affinity for 192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.010 [INFO][4537] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.013 [INFO][4537] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.64/26 host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.013 [INFO][4537] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.64/26 handle="k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.014 [INFO][4537] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8 Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.025 [INFO][4537] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.64/26 handle="k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.046 [INFO][4537] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.70/26] block=192.168.32.64/26 handle="k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.046 [INFO][4537] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.70/26] handle="k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" host="ci-4081-3-0-6-7edc95d587.novalocal" Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.046 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
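[Annotation] That is the whole block-affinity assignment, start to finish: acquire the host-wide IPAM lock, confirm this host's affinity for 192.168.32.64/26, load the block, and claim the next free address, which lands on 192.168.32.70. A minimal in-memory model of the flow (toy types, with a local mutex standing in for the host-wide lock; the real implementation keeps blocks in the datastore and handles reservations and retries this sketch omits):

    package main

    import (
            "errors"
            "fmt"
            "net/netip"
            "sync"
    )

    // block is an in-memory stand-in for an IPAM block with host
    // affinity, like 192.168.32.64/26 above.
    type block struct {
            mu   sync.Mutex            // mimics the host-wide IPAM lock
            cidr netip.Prefix
            used map[netip.Addr]string // address -> handle ID
    }

    // claim hands out the first free address in the block, recording the
    // handle ID so a later release can find it.
    func (b *block) claim(handle string) (netip.Addr, error) {
            b.mu.Lock()
            defer b.mu.Unlock()
            for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
                    if _, taken := b.used[a]; !taken {
                            b.used[a] = handle
                            return a, nil
                    }
            }
            return netip.Addr{}, errors.New("block exhausted")
    }

    func main() {
            b := &block{
                    cidr: netip.MustParsePrefix("192.168.32.64/26"),
                    used: map[netip.Addr]string{},
            }
            // Mark .64-.69 as taken by earlier endpoints, as the log implies.
            for a := b.cidr.Addr(); a.Compare(netip.MustParseAddr("192.168.32.70")) < 0; a = a.Next() {
                    b.used[a] = "earlier-endpoint"
            }
            ip, err := b.claim("k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8")
            fmt.Println(ip, err) // 192.168.32.70 <nil>
    }

Claiming under a single lock is why the "About to acquire / Acquired / Released host-wide IPAM lock" bracketing appears around every assignment and release in this log.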
Jan 29 12:50:18.103764 containerd[1452]: 2025-01-29 12:50:18.046 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.70/26] IPv6=[] ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" HandleID="k8s-pod-network.d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:18.104388 containerd[1452]: 2025-01-29 12:50:18.048 [INFO][4527] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-t4fpz" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0", GenerateName:"calico-apiserver-55d77dbf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"99f92faa-8e44-47bb-8c33-cf1d3c148912", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d77dbf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"", Pod:"calico-apiserver-55d77dbf59-t4fpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf0da9d859", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:18.104388 containerd[1452]: 2025-01-29 12:50:18.048 [INFO][4527] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.70/32] ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-t4fpz" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:18.104388 containerd[1452]: 2025-01-29 12:50:18.048 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cf0da9d859 ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-t4fpz" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:18.104388 containerd[1452]: 2025-01-29 12:50:18.055 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-t4fpz" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:18.104388 
containerd[1452]: 2025-01-29 12:50:18.055 [INFO][4527] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-t4fpz" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0", GenerateName:"calico-apiserver-55d77dbf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"99f92faa-8e44-47bb-8c33-cf1d3c148912", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d77dbf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8", Pod:"calico-apiserver-55d77dbf59-t4fpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf0da9d859", MAC:"56:fd:fb:d4:3b:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:18.104388 containerd[1452]: 2025-01-29 12:50:18.096 [INFO][4527] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8" Namespace="calico-apiserver" Pod="calico-apiserver-55d77dbf59-t4fpz" WorkloadEndpoint="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:18.105600 systemd-networkd[1374]: cali8277b40c3f1: Gained IPv6LL Jan 29 12:50:18.149427 containerd[1452]: time="2025-01-29T12:50:18.147048335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:50:18.152430 containerd[1452]: time="2025-01-29T12:50:18.147559694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:50:18.152430 containerd[1452]: time="2025-01-29T12:50:18.147578730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:18.152430 containerd[1452]: time="2025-01-29T12:50:18.149621312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:50:18.177279 systemd[1]: Started cri-containerd-d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8.scope - libcontainer container d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8. 
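[Annotation] A few records back the plugin logs "Setting the host side veth name to cali8cf0da9d859": host-side names must be deterministic per endpoint, unique on the node, and fit the kernel's 15-character interface-name limit. The derivation below is an assumed scheme for illustration only; Calico's actual algorithm may hash different inputs or use a different digest:

    package main

    import (
            "crypto/sha1"
            "encoding/hex"
            "fmt"
    )

    // vethName derives a stable host-side interface name from a workload
    // endpoint identity: "cali" plus the first 11 hex characters of a
    // SHA-1 digest, 15 characters total (IFNAMSIZ minus the NUL byte).
    // NOTE: illustrative scheme, not necessarily Calico's real one.
    func vethName(endpointID string) string {
            sum := sha1.Sum([]byte(endpointID))
            return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
            name := vethName("calico-apiserver/calico-apiserver-55d77dbf59-t4fpz/eth0")
            fmt.Println(name, len(name)) // a cali########### name, length 15
    }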
Jan 29 12:50:18.289535 containerd[1452]: time="2025-01-29T12:50:18.289491220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d77dbf59-t4fpz,Uid:99f92faa-8e44-47bb-8c33-cf1d3c148912,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8\"" Jan 29 12:50:18.679626 systemd-networkd[1374]: cali5e8df10cea6: Gained IPv6LL Jan 29 12:50:19.319573 systemd-networkd[1374]: cali8cf0da9d859: Gained IPv6LL Jan 29 12:50:19.333453 containerd[1452]: time="2025-01-29T12:50:19.333298425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:19.334883 containerd[1452]: time="2025-01-29T12:50:19.334830949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 12:50:19.340262 containerd[1452]: time="2025-01-29T12:50:19.340196776Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:19.343639 containerd[1452]: time="2025-01-29T12:50:19.343591625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:19.344348 containerd[1452]: time="2025-01-29T12:50:19.344306957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.903355444s" Jan 29 12:50:19.344420 containerd[1452]: time="2025-01-29T12:50:19.344348074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:50:19.356850 containerd[1452]: time="2025-01-29T12:50:19.356500764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:50:19.358125 containerd[1452]: time="2025-01-29T12:50:19.357988044Z" level=info msg="CreateContainer within sandbox \"86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:50:19.385613 containerd[1452]: time="2025-01-29T12:50:19.385572324Z" level=info msg="CreateContainer within sandbox \"86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3092655a573242c72d4cff99e33545ebdd1c47fee1fc52302291500f599a834a\"" Jan 29 12:50:19.387079 containerd[1452]: time="2025-01-29T12:50:19.386489064Z" level=info msg="StartContainer for \"3092655a573242c72d4cff99e33545ebdd1c47fee1fc52302291500f599a834a\"" Jan 29 12:50:19.432350 systemd[1]: run-containerd-runc-k8s.io-3092655a573242c72d4cff99e33545ebdd1c47fee1fc52302291500f599a834a-runc.o3iIeW.mount: Deactivated successfully. Jan 29 12:50:19.441595 systemd[1]: Started cri-containerd-3092655a573242c72d4cff99e33545ebdd1c47fee1fc52302291500f599a834a.scope - libcontainer container 3092655a573242c72d4cff99e33545ebdd1c47fee1fc52302291500f599a834a. 
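[Annotation] The PullImage record above reports both a size and an elapsed time, which pins down effective throughput; note that "bytes read=42001404" (the actual transfer) differs from the reported image size of 43494504, so the rate below is approximate:

    package main

    import (
            "fmt"
            "time"
    )

    func main() {
            // Figures copied from the PullImage record above.
            const sizeBytes = 43494504
            elapsed, err := time.ParseDuration("5.903355444s")
            if err != nil {
                    panic(err)
            }
            fmt.Printf("~%.1f MB/s effective pull rate\n",
                    float64(sizeBytes)/elapsed.Seconds()/1e6) // ~7.4 MB/s
    }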
Jan 29 12:50:19.513122 containerd[1452]: time="2025-01-29T12:50:19.512552608Z" level=info msg="StartContainer for \"3092655a573242c72d4cff99e33545ebdd1c47fee1fc52302291500f599a834a\" returns successfully" Jan 29 12:50:20.074044 kubelet[2603]: I0129 12:50:20.073942 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d77dbf59-pmmxr" podStartSLOduration=29.158484144 podStartE2EDuration="35.073907028s" podCreationTimestamp="2025-01-29 12:49:45 +0000 UTC" firstStartedPulling="2025-01-29 12:50:13.440514393 +0000 UTC m=+39.863471195" lastFinishedPulling="2025-01-29 12:50:19.355937277 +0000 UTC m=+45.778894079" observedRunningTime="2025-01-29 12:50:20.07071001 +0000 UTC m=+46.493666822" watchObservedRunningTime="2025-01-29 12:50:20.073907028 +0000 UTC m=+46.496863880" Jan 29 12:50:21.063413 kubelet[2603]: I0129 12:50:21.061739 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:50:21.618679 containerd[1452]: time="2025-01-29T12:50:21.618601108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:21.620596 containerd[1452]: time="2025-01-29T12:50:21.620430570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 12:50:21.622161 containerd[1452]: time="2025-01-29T12:50:21.622108438Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:21.627969 containerd[1452]: time="2025-01-29T12:50:21.627882330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:21.630733 containerd[1452]: time="2025-01-29T12:50:21.630659080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.273655463s" Jan 29 12:50:21.630998 containerd[1452]: time="2025-01-29T12:50:21.630924187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 12:50:21.641185 containerd[1452]: time="2025-01-29T12:50:21.641121086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:50:21.649820 containerd[1452]: time="2025-01-29T12:50:21.649741418Z" level=info msg="CreateContainer within sandbox \"6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:50:21.686294 containerd[1452]: time="2025-01-29T12:50:21.686158228Z" level=info msg="CreateContainer within sandbox \"6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"816fdf69733181534dc472058138f9e3baf97a19cbf9bb479e01932938a17f4c\"" Jan 29 12:50:21.688222 containerd[1452]: time="2025-01-29T12:50:21.687877443Z" level=info msg="StartContainer for \"816fdf69733181534dc472058138f9e3baf97a19cbf9bb479e01932938a17f4c\"" Jan 29 12:50:21.742709 
systemd[1]: Started cri-containerd-816fdf69733181534dc472058138f9e3baf97a19cbf9bb479e01932938a17f4c.scope - libcontainer container 816fdf69733181534dc472058138f9e3baf97a19cbf9bb479e01932938a17f4c. Jan 29 12:50:21.783088 containerd[1452]: time="2025-01-29T12:50:21.783025726Z" level=info msg="StartContainer for \"816fdf69733181534dc472058138f9e3baf97a19cbf9bb479e01932938a17f4c\" returns successfully" Jan 29 12:50:26.405343 containerd[1452]: time="2025-01-29T12:50:26.405279660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:26.406864 containerd[1452]: time="2025-01-29T12:50:26.406811073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 12:50:26.408890 containerd[1452]: time="2025-01-29T12:50:26.408748346Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:26.411691 containerd[1452]: time="2025-01-29T12:50:26.411640462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:26.412569 containerd[1452]: time="2025-01-29T12:50:26.412261327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.771077173s" Jan 29 12:50:26.412569 containerd[1452]: time="2025-01-29T12:50:26.412296994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 12:50:26.421648 containerd[1452]: time="2025-01-29T12:50:26.421620134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:50:26.458784 containerd[1452]: time="2025-01-29T12:50:26.458739419Z" level=info msg="CreateContainer within sandbox \"68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:50:26.478005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915951638.mount: Deactivated successfully. Jan 29 12:50:26.481194 containerd[1452]: time="2025-01-29T12:50:26.481148899Z" level=info msg="CreateContainer within sandbox \"68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c7fa1734097867796e4ea942367949ca0809b1ad3aa652d8681787a18661d3d2\"" Jan 29 12:50:26.481808 containerd[1452]: time="2025-01-29T12:50:26.481773472Z" level=info msg="StartContainer for \"c7fa1734097867796e4ea942367949ca0809b1ad3aa652d8681787a18661d3d2\"" Jan 29 12:50:26.512542 systemd[1]: Started cri-containerd-c7fa1734097867796e4ea942367949ca0809b1ad3aa652d8681787a18661d3d2.scope - libcontainer container c7fa1734097867796e4ea942367949ca0809b1ad3aa652d8681787a18661d3d2. 
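[Annotation] The pod_startup_latency_tracker record earlier (for calico-apiserver-55d77dbf59-pmmxr) carries everything needed to reproduce its two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that, since the SLO definition excludes pull time. Re-deriving both from the record's own fields:

    package main

    import (
            "fmt"
            "time"
    )

    // layout matches the timestamp format kubelet prints in the record.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                    panic(err)
            }
            return t
    }

    func main() {
            // Fields copied from the pod_startup_latency_tracker record above.
            created := mustParse("2025-01-29 12:49:45 +0000 UTC")
            firstPull := mustParse("2025-01-29 12:50:13.440514393 +0000 UTC")
            lastPull := mustParse("2025-01-29 12:50:19.355937277 +0000 UTC")
            running := mustParse("2025-01-29 12:50:20.073907028 +0000 UTC")

            e2e := running.Sub(created)
            slo := e2e - lastPull.Sub(firstPull) // SLO duration excludes image pulls
            fmt.Println(e2e) // 35.073907028s
            fmt.Println(slo) // 29.158484144s
    }

Both printed values match the record exactly: 35.073907028s end-to-end and 29.158484144s against the SLO.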
Jan 29 12:50:26.559353 containerd[1452]: time="2025-01-29T12:50:26.559307224Z" level=info msg="StartContainer for \"c7fa1734097867796e4ea942367949ca0809b1ad3aa652d8681787a18661d3d2\" returns successfully" Jan 29 12:50:26.831666 containerd[1452]: time="2025-01-29T12:50:26.831468769Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:26.834485 containerd[1452]: time="2025-01-29T12:50:26.833988996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:50:26.839649 containerd[1452]: time="2025-01-29T12:50:26.839562182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 417.899247ms" Jan 29 12:50:26.839893 containerd[1452]: time="2025-01-29T12:50:26.839642933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:50:26.842817 containerd[1452]: time="2025-01-29T12:50:26.842757206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:50:26.846561 containerd[1452]: time="2025-01-29T12:50:26.846272049Z" level=info msg="CreateContainer within sandbox \"d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:50:26.876889 containerd[1452]: time="2025-01-29T12:50:26.876823474Z" level=info msg="CreateContainer within sandbox \"d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"29f54253538a18fd6259e213c903341b82f6e618bf469d3af9fe9c3add2fdb81\"" Jan 29 12:50:26.879239 containerd[1452]: time="2025-01-29T12:50:26.878876115Z" level=info msg="StartContainer for \"29f54253538a18fd6259e213c903341b82f6e618bf469d3af9fe9c3add2fdb81\"" Jan 29 12:50:26.938561 systemd[1]: Started cri-containerd-29f54253538a18fd6259e213c903341b82f6e618bf469d3af9fe9c3add2fdb81.scope - libcontainer container 29f54253538a18fd6259e213c903341b82f6e618bf469d3af9fe9c3add2fdb81. 
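[Annotation] The second PullImage of calico/apiserver:v3.29.1 above returns in 417.899247ms with only 77 bytes read, and containerd emits ImageUpdate rather than ImageCreate: every blob is already in the digest-addressed content store, so only a small manifest check touches the network. A toy model of why a repeat pull is nearly free (the digests are hypothetical placeholders; containerd's real store is on-disk and verified):

    package main

    import "fmt"

    // contentStore is a toy digest-addressed blob store: a pull fetches
    // only the blobs whose digests are not already present.
    type contentStore map[string]bool

    func (cs contentStore) pull(digests []string) (fetched int) {
            for _, d := range digests {
                    if cs[d] {
                            continue // blob already present: no network fetch
                    }
                    cs[d] = true
                    fetched++
            }
            return fetched
    }

    func main() {
            cs := contentStore{}
            layers := []string{"sha256:aaaa", "sha256:bbbb", "sha256:cccc"} // hypothetical
            fmt.Println("first pull fetched:", cs.pull(layers))  // 3
            fmt.Println("second pull fetched:", cs.pull(layers)) // 0
    }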
Jan 29 12:50:26.986517 containerd[1452]: time="2025-01-29T12:50:26.986366546Z" level=info msg="StartContainer for \"29f54253538a18fd6259e213c903341b82f6e618bf469d3af9fe9c3add2fdb81\" returns successfully" Jan 29 12:50:27.139323 kubelet[2603]: I0129 12:50:27.139212 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76fcdb488d-h7k99" podStartSLOduration=33.153026821 podStartE2EDuration="42.139179933s" podCreationTimestamp="2025-01-29 12:49:45 +0000 UTC" firstStartedPulling="2025-01-29 12:50:17.435325186 +0000 UTC m=+43.858281998" lastFinishedPulling="2025-01-29 12:50:26.421478298 +0000 UTC m=+52.844435110" observedRunningTime="2025-01-29 12:50:27.135449695 +0000 UTC m=+53.558406577" watchObservedRunningTime="2025-01-29 12:50:27.139179933 +0000 UTC m=+53.562136785" Jan 29 12:50:27.171264 kubelet[2603]: I0129 12:50:27.171014 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d77dbf59-t4fpz" podStartSLOduration=33.620894799 podStartE2EDuration="42.170997922s" podCreationTimestamp="2025-01-29 12:49:45 +0000 UTC" firstStartedPulling="2025-01-29 12:50:18.291552216 +0000 UTC m=+44.714509018" lastFinishedPulling="2025-01-29 12:50:26.841655288 +0000 UTC m=+53.264612141" observedRunningTime="2025-01-29 12:50:27.169006346 +0000 UTC m=+53.591963168" watchObservedRunningTime="2025-01-29 12:50:27.170997922 +0000 UTC m=+53.593954724" Jan 29 12:50:28.116961 kubelet[2603]: I0129 12:50:28.116918 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:50:28.118022 kubelet[2603]: I0129 12:50:28.117991 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:50:29.125579 kubelet[2603]: I0129 12:50:29.125523 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:50:29.165681 systemd[1]: run-containerd-runc-k8s.io-c7fa1734097867796e4ea942367949ca0809b1ad3aa652d8681787a18661d3d2-runc.Qqd9el.mount: Deactivated successfully. Jan 29 12:50:33.616194 kubelet[2603]: I0129 12:50:33.615879 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:50:33.744809 containerd[1452]: time="2025-01-29T12:50:33.744732429Z" level=info msg="StopPodSandbox for \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\"" Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.799 [WARNING][4856] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0", GenerateName:"calico-apiserver-55d77dbf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"99f92faa-8e44-47bb-8c33-cf1d3c148912", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d77dbf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8", Pod:"calico-apiserver-55d77dbf59-t4fpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf0da9d859", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.799 [INFO][4856] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.799 [INFO][4856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" iface="eth0" netns="" Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.799 [INFO][4856] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.799 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.840 [INFO][4863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" HandleID="k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.841 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.841 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.849 [WARNING][4863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" HandleID="k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.849 [INFO][4863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" HandleID="k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.851 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:33.854727 containerd[1452]: 2025-01-29 12:50:33.852 [INFO][4856] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:33.855180 containerd[1452]: time="2025-01-29T12:50:33.854778820Z" level=info msg="TearDown network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\" successfully" Jan 29 12:50:33.855180 containerd[1452]: time="2025-01-29T12:50:33.854813324Z" level=info msg="StopPodSandbox for \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\" returns successfully" Jan 29 12:50:33.855774 containerd[1452]: time="2025-01-29T12:50:33.855728441Z" level=info msg="RemovePodSandbox for \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\"" Jan 29 12:50:33.855831 containerd[1452]: time="2025-01-29T12:50:33.855771071Z" level=info msg="Forcibly stopping sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\"" Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.899 [WARNING][4881] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0", GenerateName:"calico-apiserver-55d77dbf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"99f92faa-8e44-47bb-8c33-cf1d3c148912", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d77dbf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"d3b31854842c0d9843a81538f89a76cd95b07b5fbd68a32269841155aba210f8", Pod:"calico-apiserver-55d77dbf59-t4fpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf0da9d859", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.900 [INFO][4881] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.900 [INFO][4881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" iface="eth0" netns="" Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.900 [INFO][4881] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.900 [INFO][4881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.923 [INFO][4887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" HandleID="k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.923 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.923 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.930 [WARNING][4887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" HandleID="k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.930 [INFO][4887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" HandleID="k8s-pod-network.e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--t4fpz-eth0" Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.932 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:33.934489 containerd[1452]: 2025-01-29 12:50:33.933 [INFO][4881] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb" Jan 29 12:50:33.935757 containerd[1452]: time="2025-01-29T12:50:33.934454587Z" level=info msg="TearDown network for sandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\" successfully" Jan 29 12:50:34.178039 containerd[1452]: time="2025-01-29T12:50:34.177009974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:50:34.178039 containerd[1452]: time="2025-01-29T12:50:34.177185774Z" level=info msg="RemovePodSandbox \"e95285ae47d8a8d3a834bc77664b3a06c0356bcd00c5dc1cf4e56767735f42cb\" returns successfully" Jan 29 12:50:34.178724 containerd[1452]: time="2025-01-29T12:50:34.178660290Z" level=info msg="StopPodSandbox for \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\"" Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.274 [WARNING][4905] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0", GenerateName:"calico-apiserver-55d77dbf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"47cdff09-3717-4637-aaa6-498f177eaff7", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d77dbf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0", Pod:"calico-apiserver-55d77dbf59-pmmxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89fde96589b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.274 [INFO][4905] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.274 [INFO][4905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" iface="eth0" netns="" Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.274 [INFO][4905] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.274 [INFO][4905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.317 [INFO][4912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" HandleID="k8s-pod-network.a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.317 [INFO][4912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.317 [INFO][4912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.326 [WARNING][4912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" HandleID="k8s-pod-network.a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.327 [INFO][4912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" HandleID="k8s-pod-network.a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.329 [INFO][4912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:34.334058 containerd[1452]: 2025-01-29 12:50:34.331 [INFO][4905] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:34.335127 containerd[1452]: time="2025-01-29T12:50:34.334791060Z" level=info msg="TearDown network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\" successfully" Jan 29 12:50:34.335127 containerd[1452]: time="2025-01-29T12:50:34.334845202Z" level=info msg="StopPodSandbox for \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\" returns successfully" Jan 29 12:50:34.336026 containerd[1452]: time="2025-01-29T12:50:34.335878671Z" level=info msg="RemovePodSandbox for \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\"" Jan 29 12:50:34.336087 containerd[1452]: time="2025-01-29T12:50:34.336027320Z" level=info msg="Forcibly stopping sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\"" Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.394 [WARNING][4930] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0", GenerateName:"calico-apiserver-55d77dbf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"47cdff09-3717-4637-aaa6-498f177eaff7", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d77dbf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"86408c4f55c191f64861f42de364b3f01c193eed529940a6cf1a2ac9805185c0", Pod:"calico-apiserver-55d77dbf59-pmmxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89fde96589b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.394 [INFO][4930] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.394 [INFO][4930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" iface="eth0" netns="" Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.394 [INFO][4930] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.394 [INFO][4930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.438 [INFO][4937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" HandleID="k8s-pod-network.a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.438 [INFO][4937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.438 [INFO][4937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.446 [WARNING][4937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" HandleID="k8s-pod-network.a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.446 [INFO][4937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" HandleID="k8s-pod-network.a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--apiserver--55d77dbf59--pmmxr-eth0" Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.448 [INFO][4937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:34.450874 containerd[1452]: 2025-01-29 12:50:34.449 [INFO][4930] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4" Jan 29 12:50:34.451336 containerd[1452]: time="2025-01-29T12:50:34.450908528Z" level=info msg="TearDown network for sandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\" successfully" Jan 29 12:50:34.455090 containerd[1452]: time="2025-01-29T12:50:34.455032685Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:50:34.455090 containerd[1452]: time="2025-01-29T12:50:34.455094982Z" level=info msg="RemovePodSandbox \"a2b598ee67f9cd9cd481f8195e7a9ff4baa96fbef80d0d16d91250e567f20fa4\" returns successfully" Jan 29 12:50:34.455762 containerd[1452]: time="2025-01-29T12:50:34.455720816Z" level=info msg="StopPodSandbox for \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\"" Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.501 [WARNING][4955] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0", GenerateName:"calico-kube-controllers-76fcdb488d-", Namespace:"calico-system", SelfLink:"", UID:"75b0dcc9-8d93-4002-b03a-5f5411f1a957", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fcdb488d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9", Pod:"calico-kube-controllers-76fcdb488d-h7k99", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5e8df10cea6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.501 [INFO][4955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.501 [INFO][4955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" iface="eth0" netns="" Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.501 [INFO][4955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.501 [INFO][4955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.528 [INFO][4962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" HandleID="k8s-pod-network.0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.528 [INFO][4962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.528 [INFO][4962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.534 [WARNING][4962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" HandleID="k8s-pod-network.0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.534 [INFO][4962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" HandleID="k8s-pod-network.0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.537 [INFO][4962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:34.540450 containerd[1452]: 2025-01-29 12:50:34.539 [INFO][4955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:34.541269 containerd[1452]: time="2025-01-29T12:50:34.540828913Z" level=info msg="TearDown network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\" successfully" Jan 29 12:50:34.541269 containerd[1452]: time="2025-01-29T12:50:34.540855022Z" level=info msg="StopPodSandbox for \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\" returns successfully" Jan 29 12:50:34.543041 containerd[1452]: time="2025-01-29T12:50:34.542199885Z" level=info msg="RemovePodSandbox for \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\"" Jan 29 12:50:34.543041 containerd[1452]: time="2025-01-29T12:50:34.542227888Z" level=info msg="Forcibly stopping sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\"" Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.587 [WARNING][4980] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0", GenerateName:"calico-kube-controllers-76fcdb488d-", Namespace:"calico-system", SelfLink:"", UID:"75b0dcc9-8d93-4002-b03a-5f5411f1a957", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fcdb488d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"68b8a9a663c7233cac44c234269dff39309ca68af6cfc90f4a6f81addc8f33b9", Pod:"calico-kube-controllers-76fcdb488d-h7k99", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5e8df10cea6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.587 [INFO][4980] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.587 [INFO][4980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" iface="eth0" netns="" Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.587 [INFO][4980] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.587 [INFO][4980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.607 [INFO][4987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" HandleID="k8s-pod-network.0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.607 [INFO][4987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.607 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.615 [WARNING][4987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" HandleID="k8s-pod-network.0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.615 [INFO][4987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" HandleID="k8s-pod-network.0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-calico--kube--controllers--76fcdb488d--h7k99-eth0" Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.617 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:34.619634 containerd[1452]: 2025-01-29 12:50:34.618 [INFO][4980] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7" Jan 29 12:50:34.620293 containerd[1452]: time="2025-01-29T12:50:34.619650336Z" level=info msg="TearDown network for sandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\" successfully" Jan 29 12:50:34.624145 containerd[1452]: time="2025-01-29T12:50:34.624114611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:50:34.624218 containerd[1452]: time="2025-01-29T12:50:34.624171488Z" level=info msg="RemovePodSandbox \"0272479474272f478a8c47fc98baa9f3ed350930b76578e3afa2b10273b550e7\" returns successfully" Jan 29 12:50:34.624636 containerd[1452]: time="2025-01-29T12:50:34.624611834Z" level=info msg="StopPodSandbox for \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\"" Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.675 [WARNING][5007] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b7e92067-48ec-4b6c-a725-b3129763f04a", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d", Pod:"coredns-668d6bf9bc-zwt5p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287789e54f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.676 [INFO][5007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.676 [INFO][5007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" iface="eth0" netns="" Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.676 [INFO][5007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.676 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.696 [INFO][5013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" HandleID="k8s-pod-network.22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.696 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.696 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.703 [WARNING][5013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" HandleID="k8s-pod-network.22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.703 [INFO][5013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" HandleID="k8s-pod-network.22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.704 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:34.708088 containerd[1452]: 2025-01-29 12:50:34.706 [INFO][5007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:34.709827 containerd[1452]: time="2025-01-29T12:50:34.708210389Z" level=info msg="TearDown network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\" successfully" Jan 29 12:50:34.709827 containerd[1452]: time="2025-01-29T12:50:34.708340152Z" level=info msg="StopPodSandbox for \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\" returns successfully" Jan 29 12:50:34.709827 containerd[1452]: time="2025-01-29T12:50:34.709231114Z" level=info msg="RemovePodSandbox for \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\"" Jan 29 12:50:34.709827 containerd[1452]: time="2025-01-29T12:50:34.709259287Z" level=info msg="Forcibly stopping sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\"" Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.748 [WARNING][5031] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b7e92067-48ec-4b6c-a725-b3129763f04a", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"b7763515e74ce0bd01086bc28d3a2b84be8c683c9e9da21568ed1a99b2a1060d", Pod:"coredns-668d6bf9bc-zwt5p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287789e54f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.748 [INFO][5031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.748 [INFO][5031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" iface="eth0" netns="" Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.748 [INFO][5031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.748 [INFO][5031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.772 [INFO][5037] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" HandleID="k8s-pod-network.22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.772 [INFO][5037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.772 [INFO][5037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.780 [WARNING][5037] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" HandleID="k8s-pod-network.22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.780 [INFO][5037] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" HandleID="k8s-pod-network.22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--zwt5p-eth0" Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.783 [INFO][5037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:34.789508 containerd[1452]: 2025-01-29 12:50:34.787 [INFO][5031] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8" Jan 29 12:50:34.789508 containerd[1452]: time="2025-01-29T12:50:34.789304045Z" level=info msg="TearDown network for sandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\" successfully" Jan 29 12:50:34.843970 containerd[1452]: time="2025-01-29T12:50:34.843883493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:50:34.844086 containerd[1452]: time="2025-01-29T12:50:34.844022844Z" level=info msg="RemovePodSandbox \"22cfac8add9ed21afb3426426eccd25f5cafd5b7b75585a15670862a00ee09a8\" returns successfully" Jan 29 12:50:34.844673 containerd[1452]: time="2025-01-29T12:50:34.844619663Z" level=info msg="StopPodSandbox for \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\"" Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.897 [WARNING][5055] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3395d29-aa34-40db-87bd-39bbc4377d98", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457", Pod:"csi-node-driver-fldpg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie988f29dbcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.897 [INFO][5055] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.897 [INFO][5055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" iface="eth0" netns="" Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.897 [INFO][5055] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.897 [INFO][5055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.925 [INFO][5061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" HandleID="k8s-pod-network.61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.925 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.925 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.933 [WARNING][5061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" HandleID="k8s-pod-network.61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.933 [INFO][5061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" HandleID="k8s-pod-network.61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.934 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:34.937751 containerd[1452]: 2025-01-29 12:50:34.936 [INFO][5055] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:34.937751 containerd[1452]: time="2025-01-29T12:50:34.937717929Z" level=info msg="TearDown network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\" successfully" Jan 29 12:50:34.938220 containerd[1452]: time="2025-01-29T12:50:34.937757734Z" level=info msg="StopPodSandbox for \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\" returns successfully" Jan 29 12:50:34.940721 containerd[1452]: time="2025-01-29T12:50:34.940690836Z" level=info msg="RemovePodSandbox for \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\"" Jan 29 12:50:34.940795 containerd[1452]: time="2025-01-29T12:50:34.940725060Z" level=info msg="Forcibly stopping sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\"" Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:34.984 [WARNING][5079] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3395d29-aa34-40db-87bd-39bbc4377d98", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457", Pod:"csi-node-driver-fldpg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie988f29dbcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:34.984 [INFO][5079] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:34.984 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" iface="eth0" netns="" Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:34.984 [INFO][5079] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:34.984 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:35.007 [INFO][5085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" HandleID="k8s-pod-network.61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:35.007 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:35.007 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:35.014 [WARNING][5085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" HandleID="k8s-pod-network.61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:35.015 [INFO][5085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" HandleID="k8s-pod-network.61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-csi--node--driver--fldpg-eth0" Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:35.016 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:35.018793 containerd[1452]: 2025-01-29 12:50:35.017 [INFO][5079] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b" Jan 29 12:50:35.019236 containerd[1452]: time="2025-01-29T12:50:35.018840800Z" level=info msg="TearDown network for sandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\" successfully" Jan 29 12:50:35.023241 containerd[1452]: time="2025-01-29T12:50:35.023079041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:50:35.023241 containerd[1452]: time="2025-01-29T12:50:35.023136208Z" level=info msg="RemovePodSandbox \"61c91d992f21d5d08f84a06c4f95c7be754dd7b2c57fbfb744ccb6608b0d668b\" returns successfully" Jan 29 12:50:35.024073 containerd[1452]: time="2025-01-29T12:50:35.024044131Z" level=info msg="StopPodSandbox for \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\"" Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.060 [WARNING][5103] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f12975c-89a3-46d1-87fb-a8eed8bcd180", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17", Pod:"coredns-668d6bf9bc-n5k7q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8277b40c3f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.061 [INFO][5103] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.061 [INFO][5103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" iface="eth0" netns="" Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.061 [INFO][5103] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.061 [INFO][5103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.084 [INFO][5109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" HandleID="k8s-pod-network.74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.084 [INFO][5109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.084 [INFO][5109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.091 [WARNING][5109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" HandleID="k8s-pod-network.74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.091 [INFO][5109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" HandleID="k8s-pod-network.74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.093 [INFO][5109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:35.095645 containerd[1452]: 2025-01-29 12:50:35.094 [INFO][5103] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:35.095645 containerd[1452]: time="2025-01-29T12:50:35.095340677Z" level=info msg="TearDown network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\" successfully" Jan 29 12:50:35.095645 containerd[1452]: time="2025-01-29T12:50:35.095367598Z" level=info msg="StopPodSandbox for \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\" returns successfully" Jan 29 12:50:35.096449 containerd[1452]: time="2025-01-29T12:50:35.096426044Z" level=info msg="RemovePodSandbox for \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\"" Jan 29 12:50:35.096511 containerd[1452]: time="2025-01-29T12:50:35.096454718Z" level=info msg="Forcibly stopping sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\"" Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.139 [WARNING][5127] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f12975c-89a3-46d1-87fb-a8eed8bcd180", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 49, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-7edc95d587.novalocal", ContainerID:"8f38b978f121e1cc3546a061c951adb3ecdab77f7c4913beb9cb9631ea25ec17", Pod:"coredns-668d6bf9bc-n5k7q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8277b40c3f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.139 [INFO][5127] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.139 [INFO][5127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" iface="eth0" netns="" Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.139 [INFO][5127] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.139 [INFO][5127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.166 [INFO][5133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" HandleID="k8s-pod-network.74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.166 [INFO][5133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.166 [INFO][5133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.173 [WARNING][5133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" HandleID="k8s-pod-network.74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.173 [INFO][5133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" HandleID="k8s-pod-network.74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Workload="ci--4081--3--0--6--7edc95d587.novalocal-k8s-coredns--668d6bf9bc--n5k7q-eth0" Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.175 [INFO][5133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:50:35.177353 containerd[1452]: 2025-01-29 12:50:35.176 [INFO][5127] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac" Jan 29 12:50:35.177820 containerd[1452]: time="2025-01-29T12:50:35.177365680Z" level=info msg="TearDown network for sandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\" successfully" Jan 29 12:50:35.181998 containerd[1452]: time="2025-01-29T12:50:35.181957093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:50:35.182054 containerd[1452]: time="2025-01-29T12:50:35.182012908Z" level=info msg="RemovePodSandbox \"74978ae69d4c43d156938d00d27372fd728590eaa6eca079e2830aa8fb1a22ac\" returns successfully" Jan 29 12:50:41.017292 kubelet[2603]: I0129 12:50:41.017095 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:50:41.709165 systemd[1]: Started sshd@9-172.24.4.220:22-172.24.4.1:58148.service - OpenSSH per-connection server daemon (172.24.4.1:58148). Jan 29 12:50:43.128933 sshd[5149]: Accepted publickey for core from 172.24.4.1 port 58148 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:50:43.133775 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:50:43.146044 systemd-logind[1437]: New session 12 of user core. Jan 29 12:50:43.154750 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 12:50:43.924320 containerd[1452]: time="2025-01-29T12:50:43.924282054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:43.926615 containerd[1452]: time="2025-01-29T12:50:43.926575225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 12:50:43.928217 containerd[1452]: time="2025-01-29T12:50:43.928183432Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:43.930786 containerd[1452]: time="2025-01-29T12:50:43.930743625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:50:43.931443 containerd[1452]: time="2025-01-29T12:50:43.931300068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 17.088481517s" Jan 29 12:50:43.931443 containerd[1452]: time="2025-01-29T12:50:43.931331768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:50:43.934193 containerd[1452]: time="2025-01-29T12:50:43.934148271Z" level=info msg="CreateContainer within sandbox \"6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:50:43.937240 sshd[5149]: pam_unix(sshd:session): session closed for user core Jan 29 12:50:43.941953 systemd[1]: sshd@9-172.24.4.220:22-172.24.4.1:58148.service: Deactivated successfully. Jan 29 12:50:43.946007 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:50:43.947474 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:50:43.958776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1888542955.mount: Deactivated successfully. Jan 29 12:50:43.962549 systemd-logind[1437]: Removed session 12. Jan 29 12:50:43.965732 containerd[1452]: time="2025-01-29T12:50:43.965702202Z" level=info msg="CreateContainer within sandbox \"6256a813ea7224f0ce34a3c2b756b9813dd973dc2ad1ca2c800b87ab687be457\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"74727c101460a273af6fffd8ac6fce27fe969759974706f32a2275f732f3bd57\"" Jan 29 12:50:43.967112 containerd[1452]: time="2025-01-29T12:50:43.967073545Z" level=info msg="StartContainer for \"74727c101460a273af6fffd8ac6fce27fe969759974706f32a2275f732f3bd57\"" Jan 29 12:50:44.004572 systemd[1]: Started cri-containerd-74727c101460a273af6fffd8ac6fce27fe969759974706f32a2275f732f3bd57.scope - libcontainer container 74727c101460a273af6fffd8ac6fce27fe969759974706f32a2275f732f3bd57. 
Jan 29 12:50:44.035984 containerd[1452]: time="2025-01-29T12:50:44.035869467Z" level=info msg="StartContainer for \"74727c101460a273af6fffd8ac6fce27fe969759974706f32a2275f732f3bd57\" returns successfully" Jan 29 12:50:44.217328 kubelet[2603]: I0129 12:50:44.216014 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fldpg" podStartSLOduration=29.46568937 podStartE2EDuration="59.215999236s" podCreationTimestamp="2025-01-29 12:49:45 +0000 UTC" firstStartedPulling="2025-01-29 12:50:14.181934264 +0000 UTC m=+40.604891066" lastFinishedPulling="2025-01-29 12:50:43.93224412 +0000 UTC m=+70.355200932" observedRunningTime="2025-01-29 12:50:44.214978802 +0000 UTC m=+70.637935624" watchObservedRunningTime="2025-01-29 12:50:44.215999236 +0000 UTC m=+70.638956038" Jan 29 12:50:44.866585 kubelet[2603]: I0129 12:50:44.866481 2603 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:50:44.867567 kubelet[2603]: I0129 12:50:44.866738 2603 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:50:48.959979 systemd[1]: Started sshd@10-172.24.4.220:22-172.24.4.1:46270.service - OpenSSH per-connection server daemon (172.24.4.1:46270). Jan 29 12:50:50.253359 sshd[5225]: Accepted publickey for core from 172.24.4.1 port 46270 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:50:50.259837 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:50:50.272282 systemd-logind[1437]: New session 13 of user core. Jan 29 12:50:50.281745 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:50:51.090639 sshd[5225]: pam_unix(sshd:session): session closed for user core Jan 29 12:50:51.098375 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:50:51.098744 systemd[1]: sshd@10-172.24.4.220:22-172.24.4.1:46270.service: Deactivated successfully. Jan 29 12:50:51.103572 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:50:51.108307 systemd-logind[1437]: Removed session 13. Jan 29 12:50:56.114003 systemd[1]: Started sshd@11-172.24.4.220:22-172.24.4.1:50162.service - OpenSSH per-connection server daemon (172.24.4.1:50162). Jan 29 12:50:57.663179 sshd[5247]: Accepted publickey for core from 172.24.4.1 port 50162 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:50:57.666163 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:50:57.678567 systemd-logind[1437]: New session 14 of user core. Jan 29 12:50:57.687708 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:50:58.360459 sshd[5247]: pam_unix(sshd:session): session closed for user core Jan 29 12:50:58.367945 systemd[1]: sshd@11-172.24.4.220:22-172.24.4.1:50162.service: Deactivated successfully. Jan 29 12:50:58.369383 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:50:58.373452 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:50:58.379944 systemd[1]: Started sshd@12-172.24.4.220:22-172.24.4.1:50178.service - OpenSSH per-connection server daemon (172.24.4.1:50178). Jan 29 12:50:58.382966 systemd-logind[1437]: Removed session 14. 
Jan 29 12:50:59.202278 systemd[1]: run-containerd-runc-k8s.io-c7fa1734097867796e4ea942367949ca0809b1ad3aa652d8681787a18661d3d2-runc.tLhJVb.mount: Deactivated successfully. Jan 29 12:50:59.222933 systemd[1]: run-containerd-runc-k8s.io-c7fa1734097867796e4ea942367949ca0809b1ad3aa652d8681787a18661d3d2-runc.iqQnfI.mount: Deactivated successfully. Jan 29 12:50:59.585951 sshd[5263]: Accepted publickey for core from 172.24.4.1 port 50178 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:50:59.589282 sshd[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:50:59.601172 systemd-logind[1437]: New session 15 of user core. Jan 29 12:50:59.607716 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:51:00.477930 sshd[5263]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:00.492956 systemd[1]: sshd@12-172.24.4.220:22-172.24.4.1:50178.service: Deactivated successfully. Jan 29 12:51:00.500970 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:51:00.506005 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:51:00.514317 systemd[1]: Started sshd@13-172.24.4.220:22-172.24.4.1:50186.service - OpenSSH per-connection server daemon (172.24.4.1:50186). Jan 29 12:51:00.522893 systemd-logind[1437]: Removed session 15. Jan 29 12:51:01.817136 sshd[5312]: Accepted publickey for core from 172.24.4.1 port 50186 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:01.820668 sshd[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:01.835320 systemd-logind[1437]: New session 16 of user core. Jan 29 12:51:01.841821 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:51:02.667148 sshd[5312]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:02.673713 systemd[1]: sshd@13-172.24.4.220:22-172.24.4.1:50186.service: Deactivated successfully. Jan 29 12:51:02.679668 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:51:02.683817 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:51:02.687908 systemd-logind[1437]: Removed session 16. Jan 29 12:51:07.688005 systemd[1]: Started sshd@14-172.24.4.220:22-172.24.4.1:55772.service - OpenSSH per-connection server daemon (172.24.4.1:55772). Jan 29 12:51:08.843614 sshd[5325]: Accepted publickey for core from 172.24.4.1 port 55772 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:08.846057 sshd[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:08.852042 systemd-logind[1437]: New session 17 of user core. Jan 29 12:51:08.864766 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:51:09.632751 sshd[5325]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:09.639643 systemd[1]: sshd@14-172.24.4.220:22-172.24.4.1:55772.service: Deactivated successfully. Jan 29 12:51:09.644837 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:51:09.646656 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:51:09.649660 systemd-logind[1437]: Removed session 17. Jan 29 12:51:14.655026 systemd[1]: Started sshd@15-172.24.4.220:22-172.24.4.1:40592.service - OpenSSH per-connection server daemon (172.24.4.1:40592). 
Jan 29 12:51:15.796185 sshd[5366]: Accepted publickey for core from 172.24.4.1 port 40592 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:15.798259 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:15.808583 systemd-logind[1437]: New session 18 of user core. Jan 29 12:51:15.814734 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:51:16.574280 sshd[5366]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:16.580915 systemd[1]: sshd@15-172.24.4.220:22-172.24.4.1:40592.service: Deactivated successfully. Jan 29 12:51:16.584280 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:51:16.586336 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:51:16.589094 systemd-logind[1437]: Removed session 18. Jan 29 12:51:21.597908 systemd[1]: Started sshd@16-172.24.4.220:22-172.24.4.1:40608.service - OpenSSH per-connection server daemon (172.24.4.1:40608). Jan 29 12:51:23.362587 sshd[5379]: Accepted publickey for core from 172.24.4.1 port 40608 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:23.365446 sshd[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:23.376198 systemd-logind[1437]: New session 19 of user core. Jan 29 12:51:23.384718 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:51:24.244356 sshd[5379]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:24.256871 systemd[1]: sshd@16-172.24.4.220:22-172.24.4.1:40608.service: Deactivated successfully. Jan 29 12:51:24.260299 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:51:24.262777 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:51:24.272033 systemd[1]: Started sshd@17-172.24.4.220:22-172.24.4.1:52598.service - OpenSSH per-connection server daemon (172.24.4.1:52598). Jan 29 12:51:24.275632 systemd-logind[1437]: Removed session 19. Jan 29 12:51:25.606845 sshd[5391]: Accepted publickey for core from 172.24.4.1 port 52598 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:25.611269 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:25.623516 systemd-logind[1437]: New session 20 of user core. Jan 29 12:51:25.628770 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:51:26.640645 sshd[5391]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:26.654335 systemd[1]: sshd@17-172.24.4.220:22-172.24.4.1:52598.service: Deactivated successfully. Jan 29 12:51:26.659069 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:51:26.665093 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:51:26.672168 systemd[1]: Started sshd@18-172.24.4.220:22-172.24.4.1:52614.service - OpenSSH per-connection server daemon (172.24.4.1:52614). Jan 29 12:51:26.677310 systemd-logind[1437]: Removed session 20. Jan 29 12:51:27.903599 sshd[5403]: Accepted publickey for core from 172.24.4.1 port 52614 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:27.906555 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:27.916823 systemd-logind[1437]: New session 21 of user core. Jan 29 12:51:27.925719 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 12:51:29.823977 sshd[5403]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:29.836153 systemd[1]: sshd@18-172.24.4.220:22-172.24.4.1:52614.service: Deactivated successfully. Jan 29 12:51:29.840289 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:51:29.843556 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:51:29.852096 systemd[1]: Started sshd@19-172.24.4.220:22-172.24.4.1:52624.service - OpenSSH per-connection server daemon (172.24.4.1:52624). Jan 29 12:51:29.857008 systemd-logind[1437]: Removed session 21. Jan 29 12:51:31.091142 sshd[5443]: Accepted publickey for core from 172.24.4.1 port 52624 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:31.095753 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:31.108436 systemd-logind[1437]: New session 22 of user core. Jan 29 12:51:31.113782 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:51:32.138886 sshd[5443]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:32.150524 systemd[1]: sshd@19-172.24.4.220:22-172.24.4.1:52624.service: Deactivated successfully. Jan 29 12:51:32.155533 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:51:32.157746 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:51:32.167062 systemd[1]: Started sshd@20-172.24.4.220:22-172.24.4.1:52638.service - OpenSSH per-connection server daemon (172.24.4.1:52638). Jan 29 12:51:32.170565 systemd-logind[1437]: Removed session 22. Jan 29 12:51:33.504338 sshd[5454]: Accepted publickey for core from 172.24.4.1 port 52638 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:33.507378 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:33.517894 systemd-logind[1437]: New session 23 of user core. Jan 29 12:51:33.526703 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 12:51:34.328069 sshd[5454]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:34.335325 systemd[1]: sshd@20-172.24.4.220:22-172.24.4.1:52638.service: Deactivated successfully. Jan 29 12:51:34.339762 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 12:51:34.341897 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit. Jan 29 12:51:34.344728 systemd-logind[1437]: Removed session 23. Jan 29 12:51:39.347950 systemd[1]: Started sshd@21-172.24.4.220:22-172.24.4.1:42826.service - OpenSSH per-connection server daemon (172.24.4.1:42826). Jan 29 12:51:40.786844 sshd[5481]: Accepted publickey for core from 172.24.4.1 port 42826 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:40.790003 sshd[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:40.800956 systemd-logind[1437]: New session 24 of user core. Jan 29 12:51:40.806129 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 12:51:41.538644 sshd[5481]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:41.543088 systemd[1]: sshd@21-172.24.4.220:22-172.24.4.1:42826.service: Deactivated successfully. Jan 29 12:51:41.546268 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 12:51:41.550823 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit. Jan 29 12:51:41.553462 systemd-logind[1437]: Removed session 24. 
Jan 29 12:51:46.569725 systemd[1]: Started sshd@22-172.24.4.220:22-172.24.4.1:33840.service - OpenSSH per-connection server daemon (172.24.4.1:33840). Jan 29 12:51:47.933728 sshd[5528]: Accepted publickey for core from 172.24.4.1 port 33840 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:47.936317 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:47.950874 systemd-logind[1437]: New session 25 of user core. Jan 29 12:51:47.956564 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 12:51:48.638340 sshd[5528]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:48.648185 systemd[1]: sshd@22-172.24.4.220:22-172.24.4.1:33840.service: Deactivated successfully. Jan 29 12:51:48.654297 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 12:51:48.660481 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit. Jan 29 12:51:48.663470 systemd-logind[1437]: Removed session 25. Jan 29 12:51:53.653683 systemd[1]: Started sshd@23-172.24.4.220:22-172.24.4.1:36408.service - OpenSSH per-connection server daemon (172.24.4.1:36408). Jan 29 12:51:54.843921 sshd[5545]: Accepted publickey for core from 172.24.4.1 port 36408 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:51:54.847598 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:51:54.859992 systemd-logind[1437]: New session 26 of user core. Jan 29 12:51:54.871898 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 12:51:55.593530 sshd[5545]: pam_unix(sshd:session): session closed for user core Jan 29 12:51:55.596893 systemd[1]: sshd@23-172.24.4.220:22-172.24.4.1:36408.service: Deactivated successfully. Jan 29 12:51:55.599151 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 12:51:55.600999 systemd-logind[1437]: Session 26 logged out. Waiting for processes to exit. Jan 29 12:51:55.602208 systemd-logind[1437]: Removed session 26. Jan 29 12:52:00.617040 systemd[1]: Started sshd@24-172.24.4.220:22-172.24.4.1:36414.service - OpenSSH per-connection server daemon (172.24.4.1:36414). Jan 29 12:52:02.096525 sshd[5596]: Accepted publickey for core from 172.24.4.1 port 36414 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:52:02.100361 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:52:02.111536 systemd-logind[1437]: New session 27 of user core. Jan 29 12:52:02.117931 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 12:52:02.874192 sshd[5596]: pam_unix(sshd:session): session closed for user core Jan 29 12:52:02.881163 systemd[1]: sshd@24-172.24.4.220:22-172.24.4.1:36414.service: Deactivated successfully. Jan 29 12:52:02.885525 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 12:52:02.888256 systemd-logind[1437]: Session 27 logged out. Waiting for processes to exit. Jan 29 12:52:02.890992 systemd-logind[1437]: Removed session 27.