Jan 30 15:44:35.131475 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 15:44:35.131503 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:44:35.131514 kernel: BIOS-provided physical RAM map: Jan 30 15:44:35.131523 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 15:44:35.131531 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 15:44:35.131542 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 15:44:35.131551 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 30 15:44:35.131559 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 30 15:44:35.131567 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 15:44:35.131575 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 15:44:35.131584 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 30 15:44:35.131592 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 15:44:35.131601 kernel: NX (Execute Disable) protection: active Jan 30 15:44:35.131609 kernel: APIC: Static calls initialized Jan 30 15:44:35.131621 kernel: SMBIOS 3.0.0 present. Jan 30 15:44:35.131629 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 30 15:44:35.131638 kernel: Hypervisor detected: KVM Jan 30 15:44:35.131646 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 15:44:35.131654 kernel: kvm-clock: using sched offset of 3326994796 cycles Jan 30 15:44:35.131665 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 15:44:35.131674 kernel: tsc: Detected 1996.249 MHz processor Jan 30 15:44:35.131683 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 15:44:35.131692 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 15:44:35.131701 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 30 15:44:35.131709 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 15:44:35.131718 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 15:44:35.131726 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 30 15:44:35.131735 kernel: ACPI: Early table checksum verification disabled Jan 30 15:44:35.131746 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 30 15:44:35.131754 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:35.131763 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:35.131771 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:35.131780 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 30 15:44:35.131788 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:35.131797 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:35.131805 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 30 15:44:35.131813 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 30 15:44:35.131824 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 30 15:44:35.131832 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 30 15:44:35.131841 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 30 15:44:35.131853 kernel: No NUMA configuration found Jan 30 15:44:35.131862 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 30 15:44:35.131870 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff] Jan 30 15:44:35.131881 kernel: Zone ranges: Jan 30 15:44:35.131890 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 15:44:35.131899 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 15:44:35.131908 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 30 15:44:35.131916 kernel: Movable zone start for each node Jan 30 15:44:35.131925 kernel: Early memory node ranges Jan 30 15:44:35.131934 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 15:44:35.131943 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 30 15:44:35.131954 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 30 15:44:35.131963 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 30 15:44:35.131971 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 15:44:35.131980 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 15:44:35.131989 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 30 15:44:35.131998 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 15:44:35.132007 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 15:44:35.132015 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 15:44:35.132024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 15:44:35.132035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 15:44:35.132067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 15:44:35.132076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 15:44:35.132085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 15:44:35.132093 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 15:44:35.132103 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 15:44:35.132111 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 15:44:35.132120 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 30 15:44:35.132129 kernel: Booting paravirtualized kernel on KVM Jan 30 15:44:35.132141 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 15:44:35.132151 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 15:44:35.132160 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 15:44:35.132168 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 15:44:35.132177 kernel: pcpu-alloc: [0] 0 1 Jan 30 15:44:35.132186 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 15:44:35.132196 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:44:35.132206 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 15:44:35.132217 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 15:44:35.132226 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 15:44:35.132235 kernel: Fallback order for Node 0: 0 Jan 30 15:44:35.132243 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 30 15:44:35.132253 kernel: Policy zone: Normal Jan 30 15:44:35.132262 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 15:44:35.132270 kernel: software IO TLB: area num 2. Jan 30 15:44:35.132280 kernel: Memory: 3966216K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227296K reserved, 0K cma-reserved) Jan 30 15:44:35.132289 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 15:44:35.132301 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 15:44:35.132310 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 15:44:35.132318 kernel: Dynamic Preempt: voluntary Jan 30 15:44:35.132327 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 15:44:35.132337 kernel: rcu: RCU event tracing is enabled. Jan 30 15:44:35.132346 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 15:44:35.132355 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 15:44:35.132364 kernel: Rude variant of Tasks RCU enabled. Jan 30 15:44:35.132372 kernel: Tracing variant of Tasks RCU enabled. Jan 30 15:44:35.132383 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 15:44:35.132392 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 15:44:35.132402 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 15:44:35.132411 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 15:44:35.132420 kernel: Console: colour VGA+ 80x25 Jan 30 15:44:35.132428 kernel: printk: console [tty0] enabled Jan 30 15:44:35.132437 kernel: printk: console [ttyS0] enabled Jan 30 15:44:35.132447 kernel: ACPI: Core revision 20230628 Jan 30 15:44:35.132456 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 15:44:35.132465 kernel: x2apic enabled Jan 30 15:44:35.132477 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 15:44:35.132486 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 15:44:35.132495 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 15:44:35.132505 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 30 15:44:35.132514 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 15:44:35.132522 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 15:44:35.132531 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 15:44:35.132540 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 15:44:35.132549 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 15:44:35.132562 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 15:44:35.132570 kernel: Speculative Store Bypass: Vulnerable Jan 30 15:44:35.132579 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 30 15:44:35.132588 kernel: Freeing SMP alternatives memory: 32K Jan 30 15:44:35.132605 kernel: pid_max: default: 32768 minimum: 301 Jan 30 15:44:35.132616 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 15:44:35.132626 kernel: landlock: Up and running. Jan 30 15:44:35.132635 kernel: SELinux: Initializing. Jan 30 15:44:35.132644 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 15:44:35.132654 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 15:44:35.132664 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 30 15:44:35.132675 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:44:35.132685 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:44:35.132694 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:44:35.132704 kernel: Performance Events: AMD PMU driver. Jan 30 15:44:35.132713 kernel: ... version: 0 Jan 30 15:44:35.132724 kernel: ... bit width: 48 Jan 30 15:44:35.132733 kernel: ... generic registers: 4 Jan 30 15:44:35.132742 kernel: ... value mask: 0000ffffffffffff Jan 30 15:44:35.132752 kernel: ... max period: 00007fffffffffff Jan 30 15:44:35.132761 kernel: ... fixed-purpose events: 0 Jan 30 15:44:35.132770 kernel: ... event mask: 000000000000000f Jan 30 15:44:35.132779 kernel: signal: max sigframe size: 1440 Jan 30 15:44:35.132789 kernel: rcu: Hierarchical SRCU implementation. Jan 30 15:44:35.132798 kernel: rcu: Max phase no-delay instances is 400. Jan 30 15:44:35.132810 kernel: smp: Bringing up secondary CPUs ... Jan 30 15:44:35.132819 kernel: smpboot: x86: Booting SMP configuration: Jan 30 15:44:35.132828 kernel: .... 
node #0, CPUs: #1 Jan 30 15:44:35.132837 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 15:44:35.132847 kernel: smpboot: Max logical packages: 2 Jan 30 15:44:35.132856 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 30 15:44:35.132865 kernel: devtmpfs: initialized Jan 30 15:44:35.132874 kernel: x86/mm: Memory block size: 128MB Jan 30 15:44:35.132884 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 15:44:35.132895 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 15:44:35.132904 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 15:44:35.132914 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 15:44:35.132923 kernel: audit: initializing netlink subsys (disabled) Jan 30 15:44:35.132932 kernel: audit: type=2000 audit(1738251874.050:1): state=initialized audit_enabled=0 res=1 Jan 30 15:44:35.132942 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 15:44:35.132951 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 15:44:35.132960 kernel: cpuidle: using governor menu Jan 30 15:44:35.132969 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 15:44:35.132981 kernel: dca service started, version 1.12.1 Jan 30 15:44:35.132990 kernel: PCI: Using configuration type 1 for base access Jan 30 15:44:35.132999 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 15:44:35.133009 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 15:44:35.133019 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 15:44:35.133028 kernel: ACPI: Added _OSI(Module Device) Jan 30 15:44:35.133037 kernel: ACPI: Added _OSI(Processor Device) Jan 30 15:44:35.135399 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 15:44:35.135414 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 15:44:35.135425 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 15:44:35.135440 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 15:44:35.135450 kernel: ACPI: Interpreter enabled Jan 30 15:44:35.135463 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 15:44:35.135477 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 15:44:35.135492 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 15:44:35.135507 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 15:44:35.135521 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 15:44:35.135531 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 15:44:35.135683 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 15:44:35.135792 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 15:44:35.135887 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 15:44:35.135901 kernel: acpiphp: Slot [3] registered Jan 30 15:44:35.135912 kernel: acpiphp: Slot [4] registered Jan 30 15:44:35.135921 kernel: acpiphp: Slot [5] registered Jan 30 15:44:35.135930 kernel: acpiphp: Slot [6] registered Jan 30 15:44:35.135939 kernel: acpiphp: Slot [7] registered Jan 30 15:44:35.135952 kernel: acpiphp: Slot [8] registered Jan 30 15:44:35.135962 kernel: acpiphp: Slot [9] registered Jan 30 15:44:35.135971 kernel: acpiphp: Slot [10] registered Jan 30 15:44:35.135980 
kernel: acpiphp: Slot [11] registered Jan 30 15:44:35.135989 kernel: acpiphp: Slot [12] registered Jan 30 15:44:35.135998 kernel: acpiphp: Slot [13] registered Jan 30 15:44:35.136007 kernel: acpiphp: Slot [14] registered Jan 30 15:44:35.136017 kernel: acpiphp: Slot [15] registered Jan 30 15:44:35.136026 kernel: acpiphp: Slot [16] registered Jan 30 15:44:35.136037 kernel: acpiphp: Slot [17] registered Jan 30 15:44:35.136063 kernel: acpiphp: Slot [18] registered Jan 30 15:44:35.136073 kernel: acpiphp: Slot [19] registered Jan 30 15:44:35.136082 kernel: acpiphp: Slot [20] registered Jan 30 15:44:35.136091 kernel: acpiphp: Slot [21] registered Jan 30 15:44:35.136100 kernel: acpiphp: Slot [22] registered Jan 30 15:44:35.136110 kernel: acpiphp: Slot [23] registered Jan 30 15:44:35.136119 kernel: acpiphp: Slot [24] registered Jan 30 15:44:35.136128 kernel: acpiphp: Slot [25] registered Jan 30 15:44:35.136137 kernel: acpiphp: Slot [26] registered Jan 30 15:44:35.136149 kernel: acpiphp: Slot [27] registered Jan 30 15:44:35.136158 kernel: acpiphp: Slot [28] registered Jan 30 15:44:35.136168 kernel: acpiphp: Slot [29] registered Jan 30 15:44:35.136177 kernel: acpiphp: Slot [30] registered Jan 30 15:44:35.136186 kernel: acpiphp: Slot [31] registered Jan 30 15:44:35.136195 kernel: PCI host bridge to bus 0000:00 Jan 30 15:44:35.136304 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 15:44:35.136395 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 15:44:35.136487 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 15:44:35.136573 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 15:44:35.136660 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 30 15:44:35.136746 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 15:44:35.136859 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 15:44:35.136965 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 15:44:35.139107 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 15:44:35.139215 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 30 15:44:35.139323 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 15:44:35.139421 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 15:44:35.139517 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 15:44:35.139617 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 15:44:35.139722 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 15:44:35.139827 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 15:44:35.139923 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 15:44:35.140028 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 15:44:35.140153 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 15:44:35.140252 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 30 15:44:35.140350 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 30 15:44:35.140446 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 30 15:44:35.140549 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 15:44:35.140654 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 15:44:35.140768 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 30 15:44:35.140867 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 30 15:44:35.140967 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 30 15:44:35.143102 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 30 15:44:35.143221 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 30 15:44:35.143328 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 15:44:35.143425 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 30 15:44:35.143522 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 30 15:44:35.143633 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 15:44:35.143738 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 30 15:44:35.143844 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 30 15:44:35.143956 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 15:44:35.144096 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 30 15:44:35.144200 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 30 15:44:35.144299 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 30 15:44:35.144313 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 15:44:35.144322 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 15:44:35.144332 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 15:44:35.144342 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 15:44:35.144355 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 15:44:35.144365 kernel: iommu: Default domain type: Translated Jan 30 15:44:35.144375 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 15:44:35.144384 kernel: PCI: Using ACPI for IRQ routing Jan 30 15:44:35.144393 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 15:44:35.144403 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 15:44:35.144412 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 30 15:44:35.144506 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 15:44:35.144603 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 15:44:35.144707 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 15:44:35.144721 kernel: vgaarb: loaded Jan 30 15:44:35.144731 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 15:44:35.144740 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 15:44:35.144749 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 15:44:35.144758 kernel: pnp: PnP ACPI init Jan 30 15:44:35.144859 kernel: pnp 00:03: [dma 2] Jan 30 15:44:35.144875 kernel: pnp: PnP ACPI: found 5 devices Jan 30 15:44:35.144885 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 15:44:35.144898 kernel: NET: Registered PF_INET protocol family Jan 30 15:44:35.144907 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 15:44:35.144917 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 15:44:35.144926 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 15:44:35.144936 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 15:44:35.144945 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 30 15:44:35.144955 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 15:44:35.144965 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 15:44:35.144977 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 15:44:35.144993 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 15:44:35.145020 kernel: NET: Registered PF_XDP protocol family Jan 30 15:44:35.145500 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 15:44:35.145803 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 15:44:35.145929 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 15:44:35.146013 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 30 15:44:35.146125 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 30 15:44:35.146254 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 15:44:35.146419 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 15:44:35.146441 kernel: PCI: CLS 0 bytes, default 64 Jan 30 15:44:35.146452 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 15:44:35.146461 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 30 15:44:35.146471 kernel: Initialise system trusted keyrings Jan 30 15:44:35.146481 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 15:44:35.146490 kernel: Key type asymmetric registered Jan 30 15:44:35.146500 kernel: Asymmetric key parser 'x509' registered Jan 30 15:44:35.146514 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 15:44:35.146523 kernel: io scheduler mq-deadline registered Jan 30 15:44:35.146533 kernel: io scheduler kyber registered Jan 30 15:44:35.146542 kernel: io scheduler bfq registered Jan 30 15:44:35.146551 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 15:44:35.146561 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 15:44:35.146571 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 15:44:35.146580 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 15:44:35.146590 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 15:44:35.146602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 15:44:35.146611 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 15:44:35.146620 kernel: random: crng init done Jan 30 15:44:35.146630 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 15:44:35.146639 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 15:44:35.146649 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 15:44:35.146754 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 15:44:35.146769 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 15:44:35.146859 kernel: rtc_cmos 00:04: registered as rtc0 Jan 30 15:44:35.146953 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T15:44:34 UTC (1738251874) Jan 30 15:44:35.147067 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 30 15:44:35.147083 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 15:44:35.147093 kernel: NET: Registered PF_INET6 protocol family Jan 30 15:44:35.147102 kernel: Segment Routing with IPv6 Jan 30 15:44:35.147112 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 15:44:35.147121 kernel: NET: Registered PF_PACKET 
protocol family Jan 30 15:44:35.147130 kernel: Key type dns_resolver registered Jan 30 15:44:35.147143 kernel: IPI shorthand broadcast: enabled Jan 30 15:44:35.147153 kernel: sched_clock: Marking stable (1042008003, 169216403)->(1251873629, -40649223) Jan 30 15:44:35.147162 kernel: registered taskstats version 1 Jan 30 15:44:35.147172 kernel: Loading compiled-in X.509 certificates Jan 30 15:44:35.147181 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 15:44:35.147190 kernel: Key type .fscrypt registered Jan 30 15:44:35.147199 kernel: Key type fscrypt-provisioning registered Jan 30 15:44:35.147209 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 15:44:35.147220 kernel: ima: Allocated hash algorithm: sha1 Jan 30 15:44:35.147230 kernel: ima: No architecture policies found Jan 30 15:44:35.147239 kernel: clk: Disabling unused clocks Jan 30 15:44:35.147248 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 15:44:35.147258 kernel: Write protecting the kernel read-only data: 36864k Jan 30 15:44:35.147267 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 15:44:35.147276 kernel: Run /init as init process Jan 30 15:44:35.147286 kernel: with arguments: Jan 30 15:44:35.147296 kernel: /init Jan 30 15:44:35.147305 kernel: with environment: Jan 30 15:44:35.147316 kernel: HOME=/ Jan 30 15:44:35.147325 kernel: TERM=linux Jan 30 15:44:35.147334 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 15:44:35.147347 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:44:35.147359 systemd[1]: Detected virtualization kvm. Jan 30 15:44:35.147370 systemd[1]: Detected architecture x86-64. Jan 30 15:44:35.147380 systemd[1]: Running in initrd. Jan 30 15:44:35.147392 systemd[1]: No hostname configured, using default hostname. Jan 30 15:44:35.147402 systemd[1]: Hostname set to . Jan 30 15:44:35.147412 systemd[1]: Initializing machine ID from VM UUID. Jan 30 15:44:35.147423 systemd[1]: Queued start job for default target initrd.target. Jan 30 15:44:35.147433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:44:35.147443 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:44:35.147454 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 15:44:35.147473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:44:35.147491 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 15:44:35.147503 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 15:44:35.147515 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 15:44:35.147526 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 15:44:35.147539 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 30 15:44:35.147550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:44:35.147560 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:44:35.147571 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:44:35.147581 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:44:35.147591 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:44:35.147602 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:44:35.147612 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:44:35.147623 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 15:44:35.147636 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 15:44:35.147647 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:44:35.147657 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:44:35.147667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:44:35.147678 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:44:35.147690 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 15:44:35.147700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:44:35.147710 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 15:44:35.147721 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 15:44:35.147733 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:44:35.147744 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:44:35.147754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:44:35.147765 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 15:44:35.147775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:44:35.147786 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 15:44:35.147799 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:44:35.147829 systemd-journald[183]: Collecting audit messages is disabled. Jan 30 15:44:35.147859 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:44:35.147870 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 15:44:35.147881 systemd-journald[183]: Journal started Jan 30 15:44:35.147910 systemd-journald[183]: Runtime Journal (/run/log/journal/1dd18775a0ed45abb36853ef61ededca) is 8.0M, max 78.3M, 70.3M free. Jan 30 15:44:35.110071 systemd-modules-load[184]: Inserted module 'overlay' Jan 30 15:44:35.195081 kernel: Bridge firewalling registered Jan 30 15:44:35.195157 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:44:35.152076 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 30 15:44:35.195548 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:44:35.196250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:35.204267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:44:35.207348 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 30 15:44:35.216968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:44:35.229679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:44:35.237248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:44:35.244772 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:44:35.248804 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:44:35.251491 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:44:35.260212 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 15:44:35.274587 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:44:35.293078 dracut-cmdline[217]: dracut-dracut-053 Jan 30 15:44:35.293877 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:44:35.307127 systemd-resolved[218]: Positive Trust Anchors: Jan 30 15:44:35.307872 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:44:35.308653 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:44:35.314119 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 30 15:44:35.315734 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:44:35.316991 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:44:35.365131 kernel: SCSI subsystem initialized Jan 30 15:44:35.376108 kernel: Loading iSCSI transport class v2.0-870. Jan 30 15:44:35.389195 kernel: iscsi: registered transport (tcp) Jan 30 15:44:35.411373 kernel: iscsi: registered transport (qla4xxx) Jan 30 15:44:35.411444 kernel: QLogic iSCSI HBA Driver Jan 30 15:44:35.473565 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 15:44:35.479400 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 15:44:35.532429 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 15:44:35.532539 kernel: device-mapper: uevent: version 1.0.3 Jan 30 15:44:35.534114 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 15:44:35.607236 kernel: raid6: sse2x4 gen() 4781 MB/s Jan 30 15:44:35.625140 kernel: raid6: sse2x2 gen() 5989 MB/s Jan 30 15:44:35.643713 kernel: raid6: sse2x1 gen() 9183 MB/s Jan 30 15:44:35.643775 kernel: raid6: using algorithm sse2x1 gen() 9183 MB/s Jan 30 15:44:35.662760 kernel: raid6: .... xor() 6711 MB/s, rmw enabled Jan 30 15:44:35.662829 kernel: raid6: using ssse3x2 recovery algorithm Jan 30 15:44:35.688552 kernel: xor: measuring software checksum speed Jan 30 15:44:35.688637 kernel: prefetch64-sse : 17161 MB/sec Jan 30 15:44:35.689098 kernel: generic_sse : 15605 MB/sec Jan 30 15:44:35.690373 kernel: xor: using function: prefetch64-sse (17161 MB/sec) Jan 30 15:44:35.892113 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 15:44:35.906925 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:44:35.922751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:44:35.938301 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 30 15:44:35.949024 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:44:35.958302 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 15:44:35.978896 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 30 15:44:36.040233 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:44:36.050496 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:44:36.135968 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:44:36.145239 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 15:44:36.161503 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 15:44:36.175996 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:44:36.178641 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:44:36.180795 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:44:36.191627 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 15:44:36.221440 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 30 15:44:36.263379 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 30 15:44:36.263714 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 15:44:36.263737 kernel: GPT:17805311 != 20971519 Jan 30 15:44:36.263750 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 15:44:36.263762 kernel: GPT:17805311 != 20971519 Jan 30 15:44:36.263774 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 15:44:36.263785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:44:36.221992 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:44:36.267074 kernel: libata version 3.00 loaded. Jan 30 15:44:36.269945 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 15:44:36.271527 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 15:44:36.279328 kernel: scsi host0: ata_piix Jan 30 15:44:36.279459 kernel: scsi host1: ata_piix Jan 30 15:44:36.279592 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 30 15:44:36.279613 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 30 15:44:36.270109 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:44:36.280885 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:44:36.281807 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:44:36.281967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:36.285009 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:44:36.300633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:44:36.302030 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) Jan 30 15:44:36.315065 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (458) Jan 30 15:44:36.331689 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 15:44:36.374518 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 15:44:36.375500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:36.382973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 15:44:36.388188 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 15:44:36.388860 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 15:44:36.397277 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 15:44:36.401213 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:44:36.420524 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:44:36.529652 disk-uuid[500]: Primary Header is updated. Jan 30 15:44:36.529652 disk-uuid[500]: Secondary Entries is updated. Jan 30 15:44:36.529652 disk-uuid[500]: Secondary Header is updated. Jan 30 15:44:36.629142 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:44:36.771129 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:44:37.812160 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:44:37.812255 disk-uuid[510]: The operation has completed successfully. Jan 30 15:44:37.892312 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 15:44:37.892422 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 15:44:37.918206 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 15:44:37.925572 sh[523]: Success Jan 30 15:44:37.954101 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 30 15:44:38.042795 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 15:44:38.055243 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 15:44:38.056801 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 15:44:38.078141 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 15:44:38.078224 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:38.081100 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 15:44:38.085386 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 15:44:38.085446 kernel: BTRFS info (device dm-0): using free space tree Jan 30 15:44:38.101705 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 15:44:38.102865 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 15:44:38.109232 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 15:44:38.114490 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 15:44:38.131112 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:38.131162 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:38.135251 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:44:38.146092 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:44:38.162370 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 15:44:38.167675 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:38.181864 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 15:44:38.190373 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 15:44:38.223288 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:44:38.232253 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:44:38.255965 systemd-networkd[705]: lo: Link UP Jan 30 15:44:38.255973 systemd-networkd[705]: lo: Gained carrier Jan 30 15:44:38.257178 systemd-networkd[705]: Enumeration completed Jan 30 15:44:38.257290 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:44:38.257572 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:44:38.257576 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:44:38.257968 systemd[1]: Reached target network.target - Network. Jan 30 15:44:38.258619 systemd-networkd[705]: eth0: Link UP Jan 30 15:44:38.258623 systemd-networkd[705]: eth0: Gained carrier Jan 30 15:44:38.258631 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:44:38.274108 systemd-networkd[705]: eth0: DHCPv4 address 172.24.4.138/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 15:44:38.373955 ignition[656]: Ignition 2.19.0 Jan 30 15:44:38.373967 ignition[656]: Stage: fetch-offline Jan 30 15:44:38.374013 ignition[656]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:38.375835 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 15:44:38.374024 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:38.374161 ignition[656]: parsed url from cmdline: "" Jan 30 15:44:38.374166 ignition[656]: no config URL provided Jan 30 15:44:38.374172 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:44:38.374182 ignition[656]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:44:38.374188 ignition[656]: failed to fetch config: resource requires networking Jan 30 15:44:38.374449 ignition[656]: Ignition finished successfully Jan 30 15:44:38.382249 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 15:44:38.397526 ignition[717]: Ignition 2.19.0 Jan 30 15:44:38.397537 ignition[717]: Stage: fetch Jan 30 15:44:38.397722 ignition[717]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:38.397734 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:38.397833 ignition[717]: parsed url from cmdline: "" Jan 30 15:44:38.397837 ignition[717]: no config URL provided Jan 30 15:44:38.397843 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:44:38.397852 ignition[717]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:44:38.398101 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 30 15:44:38.398143 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 30 15:44:38.398185 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 30 15:44:38.577939 ignition[717]: GET result: OK Jan 30 15:44:38.578214 ignition[717]: parsing config with SHA512: fe0e2be9fbfbad5118f6a57e32cbe0a795a0f7c811caf1dc293b12d74429f57ec3946a66b7fdb2a978d94fa67ba349c9f25572508e7b49a5fc8dad91c89ff4f1 Jan 30 15:44:38.591312 unknown[717]: fetched base config from "system" Jan 30 15:44:38.591387 unknown[717]: fetched base config from "system" Jan 30 15:44:38.593136 ignition[717]: fetch: fetch complete Jan 30 15:44:38.591414 unknown[717]: fetched user config from "openstack" Jan 30 15:44:38.593156 ignition[717]: fetch: fetch passed Jan 30 15:44:38.599278 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 15:44:38.593283 ignition[717]: Ignition finished successfully Jan 30 15:44:38.619337 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 15:44:38.652931 ignition[723]: Ignition 2.19.0 Jan 30 15:44:38.653118 ignition[723]: Stage: kargs Jan 30 15:44:38.653540 ignition[723]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:38.653575 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:38.658647 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 15:44:38.656171 ignition[723]: kargs: kargs passed Jan 30 15:44:38.656283 ignition[723]: Ignition finished successfully Jan 30 15:44:38.677078 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 15:44:38.706549 ignition[730]: Ignition 2.19.0 Jan 30 15:44:38.706576 ignition[730]: Stage: disks Jan 30 15:44:38.707004 ignition[730]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:38.707030 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:38.712342 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 15:44:38.709656 ignition[730]: disks: disks passed Jan 30 15:44:38.715025 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 30 15:44:38.709769 ignition[730]: Ignition finished successfully Jan 30 15:44:38.717186 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 15:44:38.720003 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:44:38.722528 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:44:38.725706 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:44:38.735484 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 15:44:38.770560 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 15:44:38.787414 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 15:44:38.794246 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 15:44:38.933320 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 15:44:38.933949 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 15:44:38.934925 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 15:44:38.943281 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:44:38.946917 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 15:44:38.949033 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 15:44:38.956116 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (746) Jan 30 15:44:38.961306 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:38.961390 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:38.961434 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:44:38.963447 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 30 15:44:38.968101 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:44:38.970607 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 15:44:38.970680 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:44:38.976922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:44:38.988358 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 15:44:39.001342 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 15:44:39.172473 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 15:44:39.185674 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory Jan 30 15:44:39.191324 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 15:44:39.200157 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 15:44:39.346907 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 15:44:39.362371 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 15:44:39.365313 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 15:44:39.376158 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 30 15:44:39.382255 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:39.438313 ignition[863]: INFO : Ignition 2.19.0 Jan 30 15:44:39.439985 ignition[863]: INFO : Stage: mount Jan 30 15:44:39.438589 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 15:44:39.441837 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:39.441837 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:39.441837 ignition[863]: INFO : mount: mount passed Jan 30 15:44:39.441837 ignition[863]: INFO : Ignition finished successfully Jan 30 15:44:39.442405 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 15:44:40.197514 systemd-networkd[705]: eth0: Gained IPv6LL Jan 30 15:44:46.254160 coreos-metadata[748]: Jan 30 15:44:46.254 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:44:46.294735 coreos-metadata[748]: Jan 30 15:44:46.294 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:44:46.308739 coreos-metadata[748]: Jan 30 15:44:46.308 INFO Fetch successful Jan 30 15:44:46.310295 coreos-metadata[748]: Jan 30 15:44:46.309 INFO wrote hostname ci-4081-3-0-f-c7edc085f7.novalocal to /sysroot/etc/hostname Jan 30 15:44:46.312430 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 30 15:44:46.312688 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 30 15:44:46.325254 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 15:44:46.362530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:44:46.381105 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (880) Jan 30 15:44:46.388342 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:46.388407 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:46.392591 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:44:46.405272 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:44:46.411229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 15:44:46.455610 ignition[898]: INFO : Ignition 2.19.0
Jan 30 15:44:46.457324 ignition[898]: INFO : Stage: files
Jan 30 15:44:46.457324 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 15:44:46.457324 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 15:44:46.462502 ignition[898]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 15:44:46.463437 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 15:44:46.463437 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 15:44:46.470007 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 15:44:46.471254 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 15:44:46.472447 unknown[898]: wrote ssh authorized keys file for user: core
Jan 30 15:44:46.473215 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 15:44:46.475677 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 15:44:46.476771 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 30 15:44:46.555663 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 15:44:46.973885 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 15:44:46.973885 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 15:44:46.978901 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 30 15:44:47.533632 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 15:44:49.198843 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 15:44:49.198843 ignition[898]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 15:44:49.206370 ignition[898]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 15:44:49.206370 ignition[898]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 15:44:49.206370 ignition[898]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 15:44:49.206370 ignition[898]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 15:44:49.206370 ignition[898]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 15:44:49.206370 ignition[898]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 15:44:49.206370 ignition[898]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 15:44:49.206370 ignition[898]: INFO : files: files passed
Jan 30 15:44:49.206370 ignition[898]: INFO : Ignition finished successfully
Jan 30 15:44:49.205404 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 15:44:49.217488 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 15:44:49.220196 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 15:44:49.223983 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 15:44:49.224111 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 15:44:49.239848 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 15:44:49.240988 initrd-setup-root-after-ignition[925]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 15:44:49.240988 initrd-setup-root-after-ignition[925]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 15:44:49.243817 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 15:44:49.245348 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 15:44:49.250189 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 15:44:49.317684 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 15:44:49.317915 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 15:44:49.321408 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 15:44:49.330100 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 15:44:49.332764 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 15:44:49.340433 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 15:44:49.374207 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 15:44:49.387311 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 15:44:49.410716 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 15:44:49.412450 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 15:44:49.415650 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 15:44:49.418569 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 15:44:49.418849 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 15:44:49.422109 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 15:44:49.423982 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 15:44:49.426940 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 15:44:49.429611 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 15:44:49.432283 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 15:44:49.435311 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 15:44:49.438332 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 15:44:49.441363 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 15:44:49.444300 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 15:44:49.447376 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 15:44:49.450036 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 15:44:49.450409 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 15:44:49.453704 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 15:44:49.455776 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 15:44:49.458317 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 15:44:49.460780 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 15:44:49.463194 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 15:44:49.463479 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 15:44:49.466939 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 15:44:49.467404 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 15:44:49.471033 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 15:44:49.471471 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 15:44:49.481149 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 15:44:49.493287 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 15:44:49.498366 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 15:44:49.498753 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 15:44:49.502780 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 15:44:49.503124 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 15:44:49.515930 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 15:44:49.516067 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 15:44:49.534165 ignition[950]: INFO : Ignition 2.19.0
Jan 30 15:44:49.535164 ignition[950]: INFO : Stage: umount
Jan 30 15:44:49.536440 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 15:44:49.536440 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 15:44:49.535181 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 15:44:49.540020 ignition[950]: INFO : umount: umount passed
Jan 30 15:44:49.540020 ignition[950]: INFO : Ignition finished successfully
Jan 30 15:44:49.541681 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 15:44:49.541814 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 15:44:49.543178 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 15:44:49.543273 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 15:44:49.544808 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 15:44:49.544885 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 15:44:49.546006 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 15:44:49.546074 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 15:44:49.546993 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 15:44:49.547032 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 15:44:49.548028 systemd[1]: Stopped target network.target - Network.
Jan 30 15:44:49.548953 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 15:44:49.548996 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 15:44:49.550084 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 15:44:49.551013 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 15:44:49.551286 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 15:44:49.552116 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 15:44:49.553090 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 15:44:49.554349 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 15:44:49.554384 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 15:44:49.555587 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 15:44:49.555621 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 15:44:49.556600 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 15:44:49.556643 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 15:44:49.557766 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 15:44:49.557808 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 15:44:49.559019 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 15:44:49.559079 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 15:44:49.560193 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 15:44:49.561238 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 15:44:49.564087 systemd-networkd[705]: eth0: DHCPv6 lease lost
Jan 30 15:44:49.566038 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 15:44:49.566159 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 15:44:49.567525 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 15:44:49.567561 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 15:44:49.576224 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 15:44:49.577370 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 15:44:49.577432 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 15:44:49.578886 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 15:44:49.580509 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 15:44:49.580611 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 15:44:49.592297 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 15:44:49.592432 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 15:44:49.594540 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 15:44:49.594672 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 15:44:49.596283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 15:44:49.596336 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 15:44:49.597238 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 15:44:49.597273 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 15:44:49.598380 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 15:44:49.598424 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 15:44:49.600068 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 15:44:49.600110 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 15:44:49.601242 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 15:44:49.601284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 15:44:49.609198 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 15:44:49.611361 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 15:44:49.611413 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 15:44:49.611915 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 15:44:49.611954 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 15:44:49.612472 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 15:44:49.612511 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 15:44:49.613040 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 15:44:49.614415 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 15:44:49.615678 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 15:44:49.615722 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 15:44:49.616873 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 15:44:49.616914 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 15:44:49.618201 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 15:44:49.618255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:44:49.619826 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 15:44:49.619908 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 15:44:49.620964 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 15:44:49.628945 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 15:44:49.634089 systemd[1]: Switching root.
Jan 30 15:44:49.664201 systemd-journald[183]: Journal stopped
Jan 30 15:44:51.521743 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 30 15:44:51.521798 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 15:44:51.521814 kernel: SELinux: policy capability open_perms=1
Jan 30 15:44:51.521826 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 15:44:51.521837 kernel: SELinux: policy capability always_check_network=0
Jan 30 15:44:51.521852 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 15:44:51.521867 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 15:44:51.521878 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 15:44:51.521891 kernel: audit: type=1403 audit(1738251890.455:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 15:44:51.521903 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 15:44:51.521915 systemd[1]: Successfully loaded SELinux policy in 73.747ms.
Jan 30 15:44:51.521935 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.719ms.
Jan 30 15:44:51.521949 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 15:44:51.521962 systemd[1]: Detected virtualization kvm.
Jan 30 15:44:51.521976 systemd[1]: Detected architecture x86-64.
Jan 30 15:44:51.521988 systemd[1]: Detected first boot.
Jan 30 15:44:51.522001 systemd[1]: Hostname set to <ci-4081-3-0-f-c7edc085f7.novalocal>.
Jan 30 15:44:51.522013 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 15:44:51.522025 zram_generator::config[992]: No configuration found.
Jan 30 15:44:51.522039 systemd[1]: Populated /etc with preset unit settings.
Jan 30 15:44:51.522076 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 15:44:51.522092 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 15:44:51.522106 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 15:44:51.522119 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 15:44:51.522131 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 15:44:51.522143 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 15:44:51.522156 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 15:44:51.522168 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 15:44:51.522180 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 15:44:51.522194 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 15:44:51.522208 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 15:44:51.522220 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 15:44:51.522247 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 15:44:51.522260 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 15:44:51.522272 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 15:44:51.522284 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 15:44:51.522296 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 15:44:51.522308 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 15:44:51.522321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 15:44:51.522338 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 15:44:51.522351 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 15:44:51.522365 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 15:44:51.522378 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 15:44:51.522390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 15:44:51.522403 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 15:44:51.522418 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 15:44:51.522430 systemd[1]: Reached target swap.target - Swaps.
Jan 30 15:44:51.522443 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 15:44:51.522458 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 15:44:51.522471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 15:44:51.522484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 15:44:51.522496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 15:44:51.522509 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 15:44:51.522522 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 15:44:51.522539 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 15:44:51.522551 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 15:44:51.522564 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:44:51.522576 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 15:44:51.522589 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 15:44:51.522601 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 15:44:51.522615 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 15:44:51.522628 systemd[1]: Reached target machines.target - Containers.
Jan 30 15:44:51.522643 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 15:44:51.522657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:44:51.522670 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 15:44:51.522683 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 15:44:51.522696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 15:44:51.522708 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 15:44:51.522721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 15:44:51.522734 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 15:44:51.522747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 15:44:51.522762 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 15:44:51.522775 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 15:44:51.522788 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 15:44:51.522800 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 15:44:51.522813 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 15:44:51.522825 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 15:44:51.522837 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 15:44:51.522850 kernel: fuse: init (API version 7.39)
Jan 30 15:44:51.522862 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 15:44:51.522877 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 15:44:51.522890 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 15:44:51.522902 kernel: loop: module loaded
Jan 30 15:44:51.522915 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 15:44:51.522928 systemd[1]: Stopped verity-setup.service.
Jan 30 15:44:51.522941 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:44:51.522954 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 15:44:51.522967 kernel: ACPI: bus type drm_connector registered
Jan 30 15:44:51.522980 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 15:44:51.522994 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 15:44:51.523007 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 15:44:51.523020 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 15:44:51.523033 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 15:44:51.523062 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 15:44:51.523075 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 15:44:51.523088 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 15:44:51.523101 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 15:44:51.523114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 15:44:51.523128 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 15:44:51.523158 systemd-journald[1088]: Collecting audit messages is disabled.
Jan 30 15:44:51.523186 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 15:44:51.523199 systemd-journald[1088]: Journal started
Jan 30 15:44:51.523224 systemd-journald[1088]: Runtime Journal (/run/log/journal/1dd18775a0ed45abb36853ef61ededca) is 8.0M, max 78.3M, 70.3M free.
Jan 30 15:44:51.171719 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 15:44:51.194409 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 15:44:51.194811 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 15:44:51.524411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 15:44:51.528099 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 15:44:51.529004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 15:44:51.529202 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 15:44:51.529940 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 15:44:51.530106 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 15:44:51.530862 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 15:44:51.530996 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 15:44:51.531894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 15:44:51.532658 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 15:44:51.533521 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 15:44:51.546190 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 15:44:51.553180 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 15:44:51.560196 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 15:44:51.562124 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 15:44:51.562163 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 15:44:51.563928 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 15:44:51.572211 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 15:44:51.573801 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 15:44:51.574677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:44:51.579278 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 15:44:51.582289 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 15:44:51.582984 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 15:44:51.584967 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 15:44:51.587922 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 15:44:51.593692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 15:44:51.598295 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 15:44:51.599917 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 15:44:51.603770 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 15:44:51.604769 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 15:44:51.606491 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 15:44:51.607491 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 15:44:51.613806 systemd-journald[1088]: Time spent on flushing to /var/log/journal/1dd18775a0ed45abb36853ef61ededca is 47.754ms for 947 entries.
Jan 30 15:44:51.613806 systemd-journald[1088]: System Journal (/var/log/journal/1dd18775a0ed45abb36853ef61ededca) is 8.0M, max 584.8M, 576.8M free.
Jan 30 15:44:51.702575 systemd-journald[1088]: Received client request to flush runtime journal.
Jan 30 15:44:51.702621 kernel: loop0: detected capacity change from 0 to 8
Jan 30 15:44:51.702643 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 15:44:51.619258 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 15:44:51.644690 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 15:44:51.645851 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 15:44:51.655077 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 15:44:51.674729 udevadm[1132]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 15:44:51.675779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 15:44:51.705425 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 15:44:51.719076 kernel: loop1: detected capacity change from 0 to 140768
Jan 30 15:44:51.732308 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
Jan 30 15:44:51.732325 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
Jan 30 15:44:51.739909 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 15:44:51.748325 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 15:44:51.749635 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 15:44:51.752244 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 15:44:51.799083 kernel: loop2: detected capacity change from 0 to 142488
Jan 30 15:44:51.819987 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 15:44:51.827269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 15:44:51.853616 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Jan 30 15:44:51.853638 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Jan 30 15:44:51.859249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 15:44:51.871075 kernel: loop3: detected capacity change from 0 to 218376
Jan 30 15:44:51.928500 kernel: loop4: detected capacity change from 0 to 8
Jan 30 15:44:51.935094 kernel: loop5: detected capacity change from 0 to 140768
Jan 30 15:44:52.021333 kernel: loop6: detected capacity change from 0 to 142488
Jan 30 15:44:52.077149 kernel: loop7: detected capacity change from 0 to 218376
Jan 30 15:44:52.147205 (sd-merge)[1154]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 30 15:44:52.148277 (sd-merge)[1154]: Merged extensions into '/usr'.
Jan 30 15:44:52.163820 systemd[1]: Reloading requested from client PID 1125 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 15:44:52.163842 systemd[1]: Reloading...
Jan 30 15:44:52.254972 zram_generator::config[1177]: No configuration found.
Jan 30 15:44:52.436691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 15:44:52.500909 systemd[1]: Reloading finished in 336 ms.
Jan 30 15:44:52.537466 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 15:44:52.547266 systemd[1]: Starting ensure-sysext.service...
Jan 30 15:44:52.553199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 15:44:52.585234 systemd[1]: Reloading requested from client PID 1235 ('systemctl') (unit ensure-sysext.service)...
Jan 30 15:44:52.585374 systemd[1]: Reloading...
Jan 30 15:44:52.615949 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 15:44:52.616349 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 15:44:52.617701 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 15:44:52.618164 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Jan 30 15:44:52.618324 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Jan 30 15:44:52.640613 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 15:44:52.640818 systemd-tmpfiles[1236]: Skipping /boot
Jan 30 15:44:52.651549 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 15:44:52.651694 systemd-tmpfiles[1236]: Skipping /boot
Jan 30 15:44:52.674070 zram_generator::config[1267]: No configuration found.
Jan 30 15:44:52.838805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 15:44:52.903672 systemd[1]: Reloading finished in 317 ms.
Jan 30 15:44:52.915693 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 15:44:52.922712 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 15:44:52.931074 ldconfig[1120]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 15:44:52.934246 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 15:44:52.938088 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 15:44:52.948280 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 15:44:52.951935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 15:44:52.957232 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 15:44:52.970409 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 15:44:52.971596 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 15:44:52.978855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:44:52.979331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:44:52.985354 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 15:44:52.988282 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 15:44:52.997324 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 15:44:52.999166 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:44:53.006352 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 15:44:53.006980 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:44:53.008011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 15:44:53.009135 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 15:44:53.013202 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 15:44:53.013378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 15:44:53.015520 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 15:44:53.022038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:44:53.024317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:44:53.028336 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 15:44:53.031274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 15:44:53.031865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:44:53.042351 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 15:44:53.043058 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:44:53.044637 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 15:44:53.045639 augenrules[1353]: No rules
Jan 30 15:44:53.045959 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 15:44:53.047168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 15:44:53.048461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 15:44:53.048615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 15:44:53.051777 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 15:44:53.053108 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 15:44:53.053578 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Jan 30 15:44:53.055255 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 15:44:53.065006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:44:53.065258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:44:53.072257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 15:44:53.075399 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 15:44:53.079633 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 15:44:53.083998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 15:44:53.084661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:44:53.084801 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:44:53.086954 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 15:44:53.087923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 15:44:53.088085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 15:44:53.092213 systemd[1]: Finished ensure-sysext.service.
Jan 30 15:44:53.103259 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 15:44:53.105101 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 15:44:53.108307 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 15:44:53.119207 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 15:44:53.120036 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 15:44:53.128987 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 15:44:53.138486 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 15:44:53.139120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 15:44:53.140967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 15:44:53.141149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 15:44:53.142614 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 15:44:53.143106 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 15:44:53.149586 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 15:44:53.149654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 15:44:53.264156 systemd-networkd[1376]: lo: Link UP
Jan 30 15:44:53.264166 systemd-networkd[1376]: lo: Gained carrier
Jan 30 15:44:53.264637 systemd-networkd[1376]: Enumeration completed
Jan 30 15:44:53.264787 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 15:44:53.275262 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 15:44:53.279066 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1387)
Jan 30 15:44:53.294657 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 15:44:53.329458 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 15:44:53.332163 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 15:44:53.335579 systemd-resolved[1332]: Positive Trust Anchors:
Jan 30 15:44:53.335594 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 15:44:53.335640 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 15:44:53.343500 systemd-resolved[1332]: Using system hostname 'ci-4081-3-0-f-c7edc085f7.novalocal'.
Jan 30 15:44:53.344066 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 30 15:44:53.346870 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 15:44:53.349279 systemd[1]: Reached target network.target - Network.
Jan 30 15:44:53.349801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 15:44:53.358065 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 30 15:44:53.380630 kernel: ACPI: button: Power Button [PWRF]
Jan 30 15:44:53.409608 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:44:53.409620 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 15:44:53.412104 systemd-networkd[1376]: eth0: Link UP
Jan 30 15:44:53.412112 systemd-networkd[1376]: eth0: Gained carrier
Jan 30 15:44:53.412130 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:44:53.420087 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 30 15:44:53.426080 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 30 15:44:53.426137 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 30 15:44:53.427392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:44:53.427455 systemd-networkd[1376]: eth0: DHCPv4 address 172.24.4.138/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 30 15:44:53.428514 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection.
Jan 30 15:44:53.435201 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 15:44:53.435273 kernel: Console: switching to colour dummy device 80x25
Jan 30 15:44:53.438222 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 15:44:53.438317 kernel: [drm] features: -context_init
Jan 30 15:44:53.441405 kernel: [drm] number of scanouts: 1
Jan 30 15:44:53.441449 kernel: [drm] number of cap sets: 0
Jan 30 15:44:53.444081 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 30 15:44:53.443688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 15:44:53.448351 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 15:44:53.449873 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 15:44:53.450096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:44:53.462948 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 15:44:53.463023 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 15:44:53.456406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:44:53.470064 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 15:44:53.970139 systemd-resolved[1332]: Clock change detected. Flushing caches.
Jan 30 15:44:53.970382 systemd-timesyncd[1371]: Contacted time server 212.227.232.161:123 (0.flatcar.pool.ntp.org).
Jan 30 15:44:53.970424 systemd-timesyncd[1371]: Initial clock synchronization to Thu 2025-01-30 15:44:53.969779 UTC.
Jan 30 15:44:53.970468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 15:44:53.970636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:44:53.975967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:44:53.982284 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 15:44:53.987965 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 15:44:53.995178 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 15:44:54.010703 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 15:44:54.040291 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 15:44:54.040541 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 15:44:54.044921 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 15:44:54.065994 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 15:44:54.073214 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:44:54.075167 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 15:44:54.075349 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 15:44:54.075472 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 15:44:54.076185 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 15:44:54.076938 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 15:44:54.077026 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 15:44:54.077097 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 15:44:54.077133 systemd[1]: Reached target paths.target - Path Units.
Jan 30 15:44:54.077207 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 15:44:54.078988 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 15:44:54.080849 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 15:44:54.098325 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 15:44:54.099173 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 15:44:54.099330 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 15:44:54.099965 systemd[1]: Reached target basic.target - Basic System.
Jan 30 15:44:54.100455 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 15:44:54.100484 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 15:44:54.108818 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 15:44:54.118081 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 15:44:54.132974 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 15:44:54.139166 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 15:44:54.153032 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 15:44:54.154583 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 15:44:54.165008 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 15:44:54.173301 jq[1441]: false
Jan 30 15:44:54.178152 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 15:44:54.184023 dbus-daemon[1440]: [system] SELinux support is enabled
Jan 30 15:44:54.184604 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found loop4
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found loop5
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found loop6
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found loop7
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found vda
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found vda1
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found vda2
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found vda3
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found usr
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found vda4
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found vda6
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found vda7
Jan 30 15:44:54.193242 extend-filesystems[1442]: Found vda9
Jan 30 15:44:54.193242 extend-filesystems[1442]: Checking size of /dev/vda9
Jan 30 15:44:54.196921 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 15:44:54.215887 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 15:44:54.221636 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 15:44:54.222192 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 15:44:54.232875 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 15:44:54.236821 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 15:44:54.237810 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 15:44:54.244217 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 15:44:54.256523 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 15:44:54.263502 update_engine[1457]: I20250130 15:44:54.261255 1457 main.cc:92] Flatcar Update Engine starting
Jan 30 15:44:54.263502 update_engine[1457]: I20250130 15:44:54.262856 1457 update_check_scheduler.cc:74] Next update check in 4m12s
Jan 30 15:44:54.289018 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1392)
Jan 30 15:44:54.256736 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 15:44:54.289207 extend-filesystems[1442]: Resized partition /dev/vda9
Jan 30 15:44:54.321366 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 30 15:44:54.264148 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 15:44:54.321497 jq[1459]: true
Jan 30 15:44:54.321738 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024)
Jan 30 15:44:54.392759 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 30 15:44:54.264301 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 15:44:54.289231 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 15:44:54.289403 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 15:44:54.393803 jq[1467]: true
Jan 30 15:44:54.337275 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 15:44:54.353394 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 15:44:54.366147 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 15:44:54.369475 systemd-logind[1453]: New seat seat0.
Jan 30 15:44:54.372217 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 15:44:54.372248 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 15:44:54.377119 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 15:44:54.377142 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 15:44:54.386134 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 15:44:54.400865 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 15:44:54.410252 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 15:44:54.410252 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 30 15:44:54.410252 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 30 15:44:54.400886 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 15:44:54.414326 extend-filesystems[1442]: Resized filesystem in /dev/vda9
Jan 30 15:44:54.401089 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 15:44:54.415570 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 15:44:54.415792 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 15:44:54.417409 tar[1464]: linux-amd64/LICENSE
Jan 30 15:44:54.417662 tar[1464]: linux-amd64/helm
Jan 30 15:44:54.437877 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 15:44:54.442014 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 15:44:54.458020 systemd[1]: Starting sshkeys.service...
Jan 30 15:44:54.488927 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 15:44:54.505038 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 15:44:54.583854 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 15:44:54.778337 containerd[1471]: time="2025-01-30T15:44:54.778253459Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 15:44:54.840692 containerd[1471]: time="2025-01-30T15:44:54.838648620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 15:44:54.840692 containerd[1471]: time="2025-01-30T15:44:54.840318041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 15:44:54.840692 containerd[1471]: time="2025-01-30T15:44:54.840346284Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 15:44:54.840692 containerd[1471]: time="2025-01-30T15:44:54.840362414Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 15:44:54.840692 containerd[1471]: time="2025-01-30T15:44:54.840511814Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 15:44:54.840692 containerd[1471]: time="2025-01-30T15:44:54.840532333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 15:44:54.840692 containerd[1471]: time="2025-01-30T15:44:54.840591123Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 15:44:54.840692 containerd[1471]: time="2025-01-30T15:44:54.840607634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 15:44:54.840972 containerd[1471]: time="2025-01-30T15:44:54.840784656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 15:44:54.840972 containerd[1471]: time="2025-01-30T15:44:54.840805244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 15:44:54.840972 containerd[1471]: time="2025-01-30T15:44:54.840823739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 15:44:54.840972 containerd[1471]: time="2025-01-30T15:44:54.840835752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 15:44:54.840972 containerd[1471]: time="2025-01-30T15:44:54.840914099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 15:44:54.841148 containerd[1471]: time="2025-01-30T15:44:54.841120245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 15:44:54.841257 containerd[1471]: time="2025-01-30T15:44:54.841233017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 15:44:54.841284 containerd[1471]: time="2025-01-30T15:44:54.841256100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 15:44:54.841372 containerd[1471]: time="2025-01-30T15:44:54.841350226Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 15:44:54.841429 containerd[1471]: time="2025-01-30T15:44:54.841409077Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 15:44:54.849307 containerd[1471]: time="2025-01-30T15:44:54.849261063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 15:44:54.849347 containerd[1471]: time="2025-01-30T15:44:54.849332778Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 15:44:54.849368 containerd[1471]: time="2025-01-30T15:44:54.849356853Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 15:44:54.849389 containerd[1471]: time="2025-01-30T15:44:54.849374847Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 15:44:54.849421 containerd[1471]: time="2025-01-30T15:44:54.849391438Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 15:44:54.849920 containerd[1471]: time="2025-01-30T15:44:54.849545066Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 15:44:54.849920 containerd[1471]: time="2025-01-30T15:44:54.849807829Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 15:44:54.849980 containerd[1471]: time="2025-01-30T15:44:54.849921121Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 15:44:54.849980 containerd[1471]: time="2025-01-30T15:44:54.849939746Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 15:44:54.849980 containerd[1471]: time="2025-01-30T15:44:54.849953342Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 15:44:54.849980 containerd[1471]: time="2025-01-30T15:44:54.849971977Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 15:44:54.850062 containerd[1471]: time="2025-01-30T15:44:54.849986594Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 15:44:54.850062 containerd[1471]: time="2025-01-30T15:44:54.850001863Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 15:44:54.850062 containerd[1471]: time="2025-01-30T15:44:54.850016731Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 15:44:54.850062 containerd[1471]: time="2025-01-30T15:44:54.850032059Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 15:44:54.850062 containerd[1471]: time="2025-01-30T15:44:54.850045535Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 15:44:54.850161 containerd[1471]: time="2025-01-30T15:44:54.850066394Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 15:44:54.850161 containerd[1471]: time="2025-01-30T15:44:54.850080741Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 15:44:54.850161 containerd[1471]: time="2025-01-30T15:44:54.850103513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 15:44:54.850161 containerd[1471]: time="2025-01-30T15:44:54.850118411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 15:44:54.850161 containerd[1471]: time="2025-01-30T15:44:54.850132728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 15:44:54.850161 containerd[1471]: time="2025-01-30T15:44:54.850152575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850172773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850189104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850202549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850217377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850231904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..."
type=io.containerd.grpc.v1 Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850247994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850261419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850275636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850291075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850314128Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850337692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850351849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850366096Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 15:44:54.850608 containerd[1471]: time="2025-01-30T15:44:54.850429134Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 15:44:54.850916 containerd[1471]: time="2025-01-30T15:44:54.850451957Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 15:44:54.850916 containerd[1471]: time="2025-01-30T15:44:54.850465783Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 15:44:54.850916 containerd[1471]: time="2025-01-30T15:44:54.850480069Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 15:44:54.850916 containerd[1471]: time="2025-01-30T15:44:54.850492032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 15:44:54.850916 containerd[1471]: time="2025-01-30T15:44:54.850505607Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 15:44:54.850916 containerd[1471]: time="2025-01-30T15:44:54.850520756Z" level=info msg="NRI interface is disabled by configuration." Jan 30 15:44:54.850916 containerd[1471]: time="2025-01-30T15:44:54.850532728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 15:44:54.851069 containerd[1471]: time="2025-01-30T15:44:54.850863018Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 15:44:54.851069 containerd[1471]: time="2025-01-30T15:44:54.850940122Z" level=info msg="Connect containerd service" Jan 30 15:44:54.851069 containerd[1471]: time="2025-01-30T15:44:54.850975950Z" level=info msg="using legacy CRI server" Jan 30 15:44:54.851069 containerd[1471]: time="2025-01-30T15:44:54.850983904Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 15:44:54.851259 containerd[1471]: time="2025-01-30T15:44:54.851085305Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 15:44:54.851641 containerd[1471]: time="2025-01-30T15:44:54.851615509Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 15:44:54.855685 
containerd[1471]: time="2025-01-30T15:44:54.851993107Z" level=info msg="Start subscribing containerd event" Jan 30 15:44:54.855685 containerd[1471]: time="2025-01-30T15:44:54.852042380Z" level=info msg="Start recovering state" Jan 30 15:44:54.855685 containerd[1471]: time="2025-01-30T15:44:54.852388569Z" level=info msg="Start event monitor" Jan 30 15:44:54.855685 containerd[1471]: time="2025-01-30T15:44:54.852455645Z" level=info msg="Start snapshots syncer" Jan 30 15:44:54.855685 containerd[1471]: time="2025-01-30T15:44:54.852470943Z" level=info msg="Start cni network conf syncer for default" Jan 30 15:44:54.855685 containerd[1471]: time="2025-01-30T15:44:54.852479249Z" level=info msg="Start streaming server" Jan 30 15:44:54.855685 containerd[1471]: time="2025-01-30T15:44:54.853551700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 15:44:54.855685 containerd[1471]: time="2025-01-30T15:44:54.854407666Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 15:44:54.860631 containerd[1471]: time="2025-01-30T15:44:54.857934981Z" level=info msg="containerd successfully booted in 0.081273s" Jan 30 15:44:54.858023 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 15:44:54.977092 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 15:44:55.001833 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 15:44:55.015039 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 15:44:55.021647 systemd[1]: Started sshd@0-172.24.4.138:22-172.24.4.1:46416.service - OpenSSH per-connection server daemon (172.24.4.1:46416). Jan 30 15:44:55.029770 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 15:44:55.030035 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 15:44:55.043167 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 15:44:55.067328 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 15:44:55.078178 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 15:44:55.089043 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 15:44:55.089996 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 15:44:55.218745 tar[1464]: linux-amd64/README.md Jan 30 15:44:55.230415 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 15:44:55.408955 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 30 15:44:55.413392 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 15:44:55.418651 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 15:44:55.433880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:44:55.448166 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 15:44:55.500490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 15:44:55.929376 sshd[1525]: Accepted publickey for core from 172.24.4.1 port 46416 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:44:55.934321 sshd[1525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:44:55.962803 systemd-logind[1453]: New session 1 of user core. Jan 30 15:44:55.969555 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 15:44:55.986236 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
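
containerd's startup above records every snapshotter it skipped and why (aufs, blockfile, btrfs, devmapper, zfs), leaving overlayfs as the effective choice. A hedged sketch for pulling those skip reasons out of a journal capture like this one; the regex assumes the escaped quotes are preserved verbatim as printed above:

    import re

    # Matches: skip loading plugin \"NAME\"..." error="REASON" type=...
    SKIP_RE = re.compile(r'skip loading plugin \\"([^"]+)\\"\.\.\." error="(.*?)" type=')

    def skipped_plugins(journal_text):
        """Yield (plugin, reason) for each plugin containerd skipped."""
        for line in journal_text.splitlines():
            m = SKIP_RE.search(line)
            if m:
                yield m.group(1), m.group(2)

    # e.g. ('io.containerd.snapshotter.v1.btrfs',
    #       'path /var/lib/containerd/... must be a btrfs filesystem ... skip plugin')
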
Jan 30 15:44:56.080378 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 15:44:56.095752 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 15:44:56.117081 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 15:44:56.434131 systemd[1552]: Queued start job for default target default.target. Jan 30 15:44:56.438984 systemd[1552]: Created slice app.slice - User Application Slice. Jan 30 15:44:56.439012 systemd[1552]: Reached target paths.target - Paths. Jan 30 15:44:56.439026 systemd[1552]: Reached target timers.target - Timers. Jan 30 15:44:56.442785 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 15:44:56.451780 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 15:44:56.452401 systemd[1552]: Reached target sockets.target - Sockets. Jan 30 15:44:56.452418 systemd[1552]: Reached target basic.target - Basic System. Jan 30 15:44:56.452452 systemd[1552]: Reached target default.target - Main User Target. Jan 30 15:44:56.452478 systemd[1552]: Startup finished in 321ms. Jan 30 15:44:56.454039 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 15:44:56.461188 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 15:44:56.863660 systemd[1]: Started sshd@1-172.24.4.138:22-172.24.4.1:39176.service - OpenSSH per-connection server daemon (172.24.4.1:39176). Jan 30 15:44:57.504330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:44:57.520245 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:44:58.559435 sshd[1563]: Accepted publickey for core from 172.24.4.1 port 39176 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:44:58.564190 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:44:58.581047 systemd-logind[1453]: New session 2 of user core. Jan 30 15:44:58.593115 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 15:44:58.933994 kubelet[1570]: E0130 15:44:58.933666 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:44:58.939317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:44:58.939582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:44:58.940227 systemd[1]: kubelet.service: Consumed 1.879s CPU time. Jan 30 15:44:59.077337 sshd[1563]: pam_unix(sshd:session): session closed for user core Jan 30 15:44:59.088928 systemd[1]: sshd@1-172.24.4.138:22-172.24.4.1:39176.service: Deactivated successfully. Jan 30 15:44:59.092472 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 15:44:59.096322 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Jan 30 15:44:59.104940 systemd[1]: Started sshd@2-172.24.4.138:22-172.24.4.1:39190.service - OpenSSH per-connection server daemon (172.24.4.1:39190). Jan 30 15:44:59.112626 systemd-logind[1453]: Removed session 2. 
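
The kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet, so the unit fails and systemd schedules restarts roughly every ten seconds (the restart counter climbs through the rest of this log). On a kubeadm-managed node the file is written by kubeadm init/join; purely as an illustration of what the loader is looking for, a hypothetical repair sketch (the two header fields are the real group/version/kind, everything else is left to defaults; requires root):

    import textwrap
    from pathlib import Path

    # Hypothetical sketch only -- on this node kubeadm writes the real file later.
    MINIMAL_CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
    """)

    path = Path("/var/lib/kubelet/config.yaml")
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(MINIMAL_CONFIG)
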
Jan 30 15:45:00.124384 login[1532]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:45:00.135206 systemd-logind[1453]: New session 3 of user core. Jan 30 15:45:00.142311 login[1533]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:45:00.144358 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 15:45:00.158485 systemd-logind[1453]: New session 4 of user core. Jan 30 15:45:00.169197 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 15:45:00.552343 sshd[1584]: Accepted publickey for core from 172.24.4.1 port 39190 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:00.555283 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:00.565624 systemd-logind[1453]: New session 5 of user core. Jan 30 15:45:00.577137 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 15:45:01.212380 coreos-metadata[1437]: Jan 30 15:45:01.212 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:45:01.261317 coreos-metadata[1437]: Jan 30 15:45:01.261 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 15:45:01.341813 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:01.348283 systemd[1]: sshd@2-172.24.4.138:22-172.24.4.1:39190.service: Deactivated successfully. Jan 30 15:45:01.352073 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 15:45:01.355209 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Jan 30 15:45:01.357719 systemd-logind[1453]: Removed session 5. Jan 30 15:45:01.511423 coreos-metadata[1437]: Jan 30 15:45:01.511 INFO Fetch successful Jan 30 15:45:01.511423 coreos-metadata[1437]: Jan 30 15:45:01.511 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:45:01.525773 coreos-metadata[1437]: Jan 30 15:45:01.525 INFO Fetch successful Jan 30 15:45:01.525773 coreos-metadata[1437]: Jan 30 15:45:01.525 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 15:45:01.542788 coreos-metadata[1437]: Jan 30 15:45:01.542 INFO Fetch successful Jan 30 15:45:01.542788 coreos-metadata[1437]: Jan 30 15:45:01.542 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 15:45:01.555892 coreos-metadata[1437]: Jan 30 15:45:01.555 INFO Fetch successful Jan 30 15:45:01.555892 coreos-metadata[1437]: Jan 30 15:45:01.555 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 15:45:01.569111 coreos-metadata[1437]: Jan 30 15:45:01.569 INFO Fetch successful Jan 30 15:45:01.569111 coreos-metadata[1437]: Jan 30 15:45:01.569 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 15:45:01.582620 coreos-metadata[1437]: Jan 30 15:45:01.582 INFO Fetch successful Jan 30 15:45:01.629379 coreos-metadata[1502]: Jan 30 15:45:01.628 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:45:01.630906 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 15:45:01.632646 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 30 15:45:01.673416 coreos-metadata[1502]: Jan 30 15:45:01.673 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 15:45:01.688659 coreos-metadata[1502]: Jan 30 15:45:01.688 INFO Fetch successful Jan 30 15:45:01.688659 coreos-metadata[1502]: Jan 30 15:45:01.688 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 15:45:01.702930 coreos-metadata[1502]: Jan 30 15:45:01.702 INFO Fetch successful Jan 30 15:45:01.708801 unknown[1502]: wrote ssh authorized keys file for user: core Jan 30 15:45:01.747933 update-ssh-keys[1626]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:45:01.748955 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 15:45:01.752938 systemd[1]: Finished sshkeys.service. Jan 30 15:45:01.757152 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 15:45:01.757488 systemd[1]: Startup finished in 1.284s (kernel) + 15.615s (initrd) + 10.883s (userspace) = 27.783s. Jan 30 15:45:08.980271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 15:45:08.987154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:09.426036 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:45:09.426131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:09.892870 kubelet[1637]: E0130 15:45:09.892767 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:45:09.900494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:45:09.900916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:45:11.365140 systemd[1]: Started sshd@3-172.24.4.138:22-172.24.4.1:53920.service - OpenSSH per-connection server daemon (172.24.4.1:53920). Jan 30 15:45:13.048506 sshd[1645]: Accepted publickey for core from 172.24.4.1 port 53920 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:13.051220 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:13.062204 systemd-logind[1453]: New session 6 of user core. Jan 30 15:45:13.069145 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 15:45:13.830786 sshd[1645]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:13.842959 systemd[1]: sshd@3-172.24.4.138:22-172.24.4.1:53920.service: Deactivated successfully. Jan 30 15:45:13.846760 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 15:45:13.850852 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Jan 30 15:45:13.860463 systemd[1]: Started sshd@4-172.24.4.138:22-172.24.4.1:48146.service - OpenSSH per-connection server daemon (172.24.4.1:48146). Jan 30 15:45:13.863436 systemd-logind[1453]: Removed session 6. 
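
Both metadata agents above fall back from the missing config drive to OpenStack's EC2-compatible service at 169.254.169.254 and fetch each attribute over plain HTTP. The same endpoints can be queried by hand from the instance; the paths below are taken verbatim from the fetches logged above:

    import urllib.request

    BASE = "http://169.254.169.254"
    PATHS = [
        "/latest/meta-data/hostname",
        "/latest/meta-data/instance-id",
        "/latest/meta-data/instance-type",
        "/latest/meta-data/local-ipv4",
        "/latest/meta-data/public-ipv4",
        "/latest/meta-data/public-keys/0/openssh-key",
    ]

    for p in PATHS:
        with urllib.request.urlopen(BASE + p, timeout=2) as resp:
            print(p, "->", resp.read().decode().strip())
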
Jan 30 15:45:15.199993 sshd[1652]: Accepted publickey for core from 172.24.4.1 port 48146 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:15.202814 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:15.212633 systemd-logind[1453]: New session 7 of user core. Jan 30 15:45:15.222006 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 15:45:15.759126 sshd[1652]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:15.777178 systemd[1]: sshd@4-172.24.4.138:22-172.24.4.1:48146.service: Deactivated successfully. Jan 30 15:45:15.780914 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 15:45:15.782933 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Jan 30 15:45:15.793568 systemd[1]: Started sshd@5-172.24.4.138:22-172.24.4.1:48148.service - OpenSSH per-connection server daemon (172.24.4.1:48148). Jan 30 15:45:15.797425 systemd-logind[1453]: Removed session 7. Jan 30 15:45:17.280850 sshd[1659]: Accepted publickey for core from 172.24.4.1 port 48148 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:17.283615 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:17.294023 systemd-logind[1453]: New session 8 of user core. Jan 30 15:45:17.311088 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 15:45:18.062122 sshd[1659]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:18.075380 systemd[1]: sshd@5-172.24.4.138:22-172.24.4.1:48148.service: Deactivated successfully. Jan 30 15:45:18.079142 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 15:45:18.082170 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Jan 30 15:45:18.092434 systemd[1]: Started sshd@6-172.24.4.138:22-172.24.4.1:48152.service - OpenSSH per-connection server daemon (172.24.4.1:48152). Jan 30 15:45:18.094971 systemd-logind[1453]: Removed session 8. Jan 30 15:45:19.453650 sshd[1666]: Accepted publickey for core from 172.24.4.1 port 48152 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:19.456409 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:19.466010 systemd-logind[1453]: New session 9 of user core. Jan 30 15:45:19.479142 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 15:45:19.912820 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 15:45:19.913459 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:45:19.915375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 15:45:19.925435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:19.933071 sudo[1669]: pam_unix(sudo:session): session closed for user root Jan 30 15:45:20.191107 sshd[1666]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:20.219275 systemd[1]: Started sshd@7-172.24.4.138:22-172.24.4.1:48162.service - OpenSSH per-connection server daemon (172.24.4.1:48162). Jan 30 15:45:20.220511 systemd[1]: sshd@6-172.24.4.138:22-172.24.4.1:48152.service: Deactivated successfully. Jan 30 15:45:20.224662 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 15:45:20.228069 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Jan 30 15:45:20.231139 systemd-logind[1453]: Removed session 9. 
Jan 30 15:45:20.480015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:20.480313 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:45:20.562564 kubelet[1684]: E0130 15:45:20.562522 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:45:20.565524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:45:20.565873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:45:21.277998 sshd[1675]: Accepted publickey for core from 172.24.4.1 port 48162 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:21.281241 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:21.293905 systemd-logind[1453]: New session 10 of user core. Jan 30 15:45:21.302179 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 15:45:21.728179 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 15:45:21.728956 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:45:21.736133 sudo[1693]: pam_unix(sudo:session): session closed for user root Jan 30 15:45:21.747378 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 15:45:21.748071 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:45:21.784777 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 15:45:21.788801 auditctl[1696]: No rules Jan 30 15:45:21.789575 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 15:45:21.790118 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 15:45:21.797431 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 15:45:21.858396 augenrules[1715]: No rules Jan 30 15:45:21.862068 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:45:21.865467 sudo[1692]: pam_unix(sudo:session): session closed for user root Jan 30 15:45:22.121468 sshd[1675]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:22.133305 systemd[1]: sshd@7-172.24.4.138:22-172.24.4.1:48162.service: Deactivated successfully. Jan 30 15:45:22.136123 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 15:45:22.139054 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Jan 30 15:45:22.145226 systemd[1]: Started sshd@8-172.24.4.138:22-172.24.4.1:48166.service - OpenSSH per-connection server daemon (172.24.4.1:48166). Jan 30 15:45:22.148379 systemd-logind[1453]: Removed session 10. Jan 30 15:45:23.426242 sshd[1723]: Accepted publickey for core from 172.24.4.1 port 48166 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:23.428929 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:23.440372 systemd-logind[1453]: New session 11 of user core. Jan 30 15:45:23.452092 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 30 15:45:23.856810 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 15:45:23.858234 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:45:24.647004 (dockerd)[1741]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 15:45:24.647136 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 15:45:25.170734 dockerd[1741]: time="2025-01-30T15:45:25.170534604Z" level=info msg="Starting up" Jan 30 15:45:25.388079 dockerd[1741]: time="2025-01-30T15:45:25.387989491Z" level=info msg="Loading containers: start." Jan 30 15:45:25.557741 kernel: Initializing XFRM netlink socket Jan 30 15:45:25.659713 systemd-networkd[1376]: docker0: Link UP Jan 30 15:45:25.676016 dockerd[1741]: time="2025-01-30T15:45:25.674944665Z" level=info msg="Loading containers: done." Jan 30 15:45:25.698468 dockerd[1741]: time="2025-01-30T15:45:25.698399685Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 15:45:25.698780 dockerd[1741]: time="2025-01-30T15:45:25.698517887Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 15:45:25.698780 dockerd[1741]: time="2025-01-30T15:45:25.698630127Z" level=info msg="Daemon has completed initialization" Jan 30 15:45:25.699132 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2984511744-merged.mount: Deactivated successfully. Jan 30 15:45:25.797218 dockerd[1741]: time="2025-01-30T15:45:25.796868523Z" level=info msg="API listen on /run/docker.sock" Jan 30 15:45:25.797877 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 15:45:27.269712 containerd[1471]: time="2025-01-30T15:45:27.269477231Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 15:45:28.178753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119447385.mount: Deactivated successfully. Jan 30 15:45:30.730231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 15:45:30.739645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:30.903128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:30.912860 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:45:31.212130 kubelet[1906]: E0130 15:45:31.211469 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:45:31.218371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:45:31.219053 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
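
dockerd above comes up on /run/docker.sock with the overlay2 storage driver (and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled). Those details can be read back from the daemon's /info endpoint; a stdlib-only sketch, assuming the socket path reported in the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over the Docker unix socket."""
        def __init__(self, path="/run/docker.sock"):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection()
    conn.request("GET", "/info")
    info = json.loads(conn.getresponse().read())
    print(info["Driver"], info["ServerVersion"])   # expect: overlay2 26.1.0
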
Jan 30 15:45:33.382224 containerd[1471]: time="2025-01-30T15:45:33.382163937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:33.383704 containerd[1471]: time="2025-01-30T15:45:33.383547480Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674832" Jan 30 15:45:33.385077 containerd[1471]: time="2025-01-30T15:45:33.385035497Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:33.388328 containerd[1471]: time="2025-01-30T15:45:33.388249195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:33.389555 containerd[1471]: time="2025-01-30T15:45:33.389402557Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 6.119855005s" Jan 30 15:45:33.389555 containerd[1471]: time="2025-01-30T15:45:33.389435439Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 15:45:33.390091 containerd[1471]: time="2025-01-30T15:45:33.389984231Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 15:45:35.419722 containerd[1471]: time="2025-01-30T15:45:35.419373174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:35.421094 containerd[1471]: time="2025-01-30T15:45:35.420904536Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770719" Jan 30 15:45:35.422193 containerd[1471]: time="2025-01-30T15:45:35.422122784Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:35.425768 containerd[1471]: time="2025-01-30T15:45:35.425724756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:35.427490 containerd[1471]: time="2025-01-30T15:45:35.426986954Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 2.036821849s" Jan 30 15:45:35.427490 containerd[1471]: time="2025-01-30T15:45:35.427045799Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 15:45:35.428077 
containerd[1471]: time="2025-01-30T15:45:35.428044985Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 15:45:37.063620 containerd[1471]: time="2025-01-30T15:45:37.063537255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:37.065046 containerd[1471]: time="2025-01-30T15:45:37.064968393Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169767" Jan 30 15:45:37.067692 containerd[1471]: time="2025-01-30T15:45:37.066161589Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:37.069828 containerd[1471]: time="2025-01-30T15:45:37.069793627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:37.071338 containerd[1471]: time="2025-01-30T15:45:37.071303944Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.643228068s" Jan 30 15:45:37.071387 containerd[1471]: time="2025-01-30T15:45:37.071339132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 15:45:37.071816 containerd[1471]: time="2025-01-30T15:45:37.071790913Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 15:45:38.478540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542987946.mount: Deactivated successfully. 
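
The three control-plane pulls above each report bytes read alongside wall time, which makes the effective registry throughput easy to back out; a throwaway calculation with the numbers copied from the log:

    # (image, bytes read, seconds) from the containerd pull messages above.
    pulls = [
        ("kube-apiserver:v1.32.1",          28_674_832, 6.119855005),
        ("kube-controller-manager:v1.32.1", 24_770_719, 2.036821849),
        ("kube-scheduler:v1.32.1",          19_169_767, 1.643228068),
    ]

    for name, nbytes, secs in pulls:
        print(f"{name}: {nbytes / secs / 2**20:.1f} MiB/s")
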
Jan 30 15:45:39.075594 containerd[1471]: time="2025-01-30T15:45:39.075346927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:39.094440 containerd[1471]: time="2025-01-30T15:45:39.094336194Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909474" Jan 30 15:45:39.113459 containerd[1471]: time="2025-01-30T15:45:39.113368242Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:39.149630 containerd[1471]: time="2025-01-30T15:45:39.149474691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:39.151890 containerd[1471]: time="2025-01-30T15:45:39.151457009Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.078811563s" Jan 30 15:45:39.151890 containerd[1471]: time="2025-01-30T15:45:39.151578512Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 15:45:39.153793 containerd[1471]: time="2025-01-30T15:45:39.153369240Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 15:45:39.990050 update_engine[1457]: I20250130 15:45:39.988822 1457 update_attempter.cc:509] Updating boot flags... Jan 30 15:45:40.042227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383345593.mount: Deactivated successfully. Jan 30 15:45:40.064411 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1976) Jan 30 15:45:40.119699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1975) Jan 30 15:45:40.172748 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1975) Jan 30 15:45:41.229558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 15:45:41.237833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:41.774544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:41.778547 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:45:41.861484 kubelet[2040]: E0130 15:45:41.861401 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:45:41.864917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:45:41.865204 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 15:45:41.916359 containerd[1471]: time="2025-01-30T15:45:41.916208005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:41.918786 containerd[1471]: time="2025-01-30T15:45:41.918598723Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 30 15:45:41.920472 containerd[1471]: time="2025-01-30T15:45:41.920399033Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:41.930114 containerd[1471]: time="2025-01-30T15:45:41.930052483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:41.933474 containerd[1471]: time="2025-01-30T15:45:41.933388693Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.779952241s" Jan 30 15:45:41.933603 containerd[1471]: time="2025-01-30T15:45:41.933472355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 15:45:41.934572 containerd[1471]: time="2025-01-30T15:45:41.934469705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 15:45:42.542263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341277841.mount: Deactivated successfully. 
Jan 30 15:45:42.554942 containerd[1471]: time="2025-01-30T15:45:42.554824462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:42.556887 containerd[1471]: time="2025-01-30T15:45:42.556769519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 30 15:45:42.559717 containerd[1471]: time="2025-01-30T15:45:42.558333906Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:42.566010 containerd[1471]: time="2025-01-30T15:45:42.565947969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:42.567965 containerd[1471]: time="2025-01-30T15:45:42.567871017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 633.342213ms" Jan 30 15:45:42.567965 containerd[1471]: time="2025-01-30T15:45:42.567953368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 15:45:42.570507 containerd[1471]: time="2025-01-30T15:45:42.570238514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 15:45:43.245907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605173200.mount: Deactivated successfully. Jan 30 15:45:47.041228 containerd[1471]: time="2025-01-30T15:45:47.041163000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:47.044710 containerd[1471]: time="2025-01-30T15:45:47.044521745Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551328" Jan 30 15:45:47.046336 containerd[1471]: time="2025-01-30T15:45:47.046276760Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:47.052055 containerd[1471]: time="2025-01-30T15:45:47.051080637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:47.052055 containerd[1471]: time="2025-01-30T15:45:47.051925648Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.480848133s" Jan 30 15:45:47.052055 containerd[1471]: time="2025-01-30T15:45:47.051953306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 15:45:51.220878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 15:45:51.231971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:51.271800 systemd[1]: Reloading requested from client PID 2133 ('systemctl') (unit session-11.scope)... Jan 30 15:45:51.272060 systemd[1]: Reloading... Jan 30 15:45:51.364717 zram_generator::config[2171]: No configuration found. Jan 30 15:45:51.524991 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:45:51.611995 systemd[1]: Reloading finished in 339 ms. Jan 30 15:45:51.660628 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 15:45:51.660881 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 15:45:51.661179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:51.666964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:51.790746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:51.802928 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:45:52.019475 kubelet[2238]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:45:52.019475 kubelet[2238]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 15:45:52.019475 kubelet[2238]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 15:45:52.020124 kubelet[2238]: I0130 15:45:52.019610 2238 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:45:52.686063 kubelet[2238]: I0130 15:45:52.685997 2238 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 15:45:52.686063 kubelet[2238]: I0130 15:45:52.686030 2238 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:45:52.688721 kubelet[2238]: I0130 15:45:52.687124 2238 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 15:45:52.722313 kubelet[2238]: E0130 15:45:52.722177 2238 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.138:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:45:52.723499 kubelet[2238]: I0130 15:45:52.722969 2238 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:45:52.744429 kubelet[2238]: E0130 15:45:52.744366 2238 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 15:45:52.746786 kubelet[2238]: I0130 15:45:52.744823 2238 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 15:45:52.753348 kubelet[2238]: I0130 15:45:52.753288 2238 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 15:45:52.754274 kubelet[2238]: I0130 15:45:52.754203 2238 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:45:52.754925 kubelet[2238]: I0130 15:45:52.754431 2238 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-f-c7edc085f7.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 15:45:52.755303 kubelet[2238]: I0130 15:45:52.755267 2238 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 15:45:52.755466 kubelet[2238]: I0130 15:45:52.755444 2238 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 15:45:52.755954 kubelet[2238]: I0130 15:45:52.755920 2238 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:45:52.769819 kubelet[2238]: I0130 15:45:52.769768 2238 kubelet.go:446] "Attempting to sync node with API server" Jan 30 15:45:52.770136 kubelet[2238]: I0130 15:45:52.770083 2238 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:45:52.770346 kubelet[2238]: I0130 15:45:52.770320 2238 kubelet.go:352] "Adding apiserver pod source" Jan 30 15:45:52.770500 kubelet[2238]: I0130 15:45:52.770478 2238 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:45:52.779154 kubelet[2238]: W0130 15:45:52.778570 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-f-c7edc085f7.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused Jan 30 15:45:52.779154 kubelet[2238]: E0130 15:45:52.778637 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-f-c7edc085f7.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.138:6443: connect: connection refused" logger="UnhandledError" 
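
The NodeConfig dump above encodes the kubelet's hard eviction thresholds in a mixed representation: memory.available is an absolute quantity (100Mi) while the filesystem signals are fractions of capacity (nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%, imagefs.inodesFree 5%). Restated as a small sketch to make the two encodings explicit:

    # Hard eviction thresholds from the HardEvictionThresholds dump above.
    THRESHOLDS = {
        "memory.available":   ("quantity", 100 * 2**20),  # 100Mi
        "nodefs.available":   ("percentage", 0.10),
        "nodefs.inodesFree":  ("percentage", 0.05),
        "imagefs.available":  ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    def below_threshold(signal, observed, capacity=None):
        """A signal trips when the observed value falls below its threshold."""
        kind, value = THRESHOLDS[signal]
        limit = value if kind == "quantity" else value * capacity
        return observed < limit
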
Jan 30 15:45:52.779154 kubelet[2238]: W0130 15:45:52.778992 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Jan 30 15:45:52.779154 kubelet[2238]: E0130 15:45:52.779024 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.138:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:45:52.780150 kubelet[2238]: I0130 15:45:52.780097 2238 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 15:45:52.780594 kubelet[2238]: I0130 15:45:52.780541 2238 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 15:45:52.782716 kubelet[2238]: W0130 15:45:52.782568 2238 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 15:45:52.791698 kubelet[2238]: I0130 15:45:52.791620 2238 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 30 15:45:52.791806 kubelet[2238]: I0130 15:45:52.791759 2238 server.go:1287] "Started kubelet"
Jan 30 15:45:52.795230 kubelet[2238]: I0130 15:45:52.795077 2238 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 15:45:52.796008 kubelet[2238]: I0130 15:45:52.795937 2238 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 15:45:52.800247 kubelet[2238]: I0130 15:45:52.800208 2238 server.go:490] "Adding debug handlers to kubelet server"
Jan 30 15:45:52.804517 kubelet[2238]: I0130 15:45:52.804412 2238 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 15:45:52.805325 kubelet[2238]: I0130 15:45:52.804988 2238 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 15:45:52.806729 kubelet[2238]: I0130 15:45:52.806397 2238 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 15:45:52.809116 kubelet[2238]: I0130 15:45:52.809084 2238 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 30 15:45:52.809497 kubelet[2238]: E0130 15:45:52.809475 2238 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found"
Jan 30 15:45:52.813518 kubelet[2238]: E0130 15:45:52.813481 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-f-c7edc085f7.novalocal?timeout=10s\": dial tcp 172.24.4.138:6443: connect: connection refused" interval="200ms"
Jan 30 15:45:52.814018 kubelet[2238]: I0130 15:45:52.814006 2238 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 15:45:52.815054 kubelet[2238]: W0130 15:45:52.815017 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Jan 30 15:45:52.815336 kubelet[2238]: E0130 15:45:52.815136 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.138:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:45:52.817092 kubelet[2238]: I0130 15:45:52.816496 2238 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 15:45:52.817219 kubelet[2238]: E0130 15:45:52.815193 2238 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.138:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.138:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-f-c7edc085f7.novalocal.181f82f10047670f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-f-c7edc085f7.novalocal,UID:ci-4081-3-0-f-c7edc085f7.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-f-c7edc085f7.novalocal,},FirstTimestamp:2025-01-30 15:45:52.791709455 +0000 UTC m=+0.985452293,LastTimestamp:2025-01-30 15:45:52.791709455 +0000 UTC m=+0.985452293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-f-c7edc085f7.novalocal,}"
Jan 30 15:45:52.818117 kubelet[2238]: I0130 15:45:52.818101 2238 factory.go:221] Registration of the containerd container factory successfully
Jan 30 15:45:52.818202 kubelet[2238]: I0130 15:45:52.818193 2238 factory.go:221] Registration of the systemd container factory successfully
Jan 30 15:45:52.818321 kubelet[2238]: I0130 15:45:52.818305 2238 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 15:45:52.827316 kubelet[2238]: I0130 15:45:52.827282 2238 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 15:45:52.829304 kubelet[2238]: I0130 15:45:52.829075 2238 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 15:45:52.829304 kubelet[2238]: I0130 15:45:52.829093 2238 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 30 15:45:52.829304 kubelet[2238]: I0130 15:45:52.829111 2238 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 30 15:45:52.829304 kubelet[2238]: I0130 15:45:52.829118 2238 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 30 15:45:52.829304 kubelet[2238]: E0130 15:45:52.829155 2238 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 15:45:52.835610 kubelet[2238]: W0130 15:45:52.835576 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Jan 30 15:45:52.835794 kubelet[2238]: E0130 15:45:52.835773 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.138:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:45:52.835962 kubelet[2238]: E0130 15:45:52.835946 2238 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 15:45:52.848190 kubelet[2238]: I0130 15:45:52.848139 2238 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 30 15:45:52.848190 kubelet[2238]: I0130 15:45:52.848161 2238 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 30 15:45:52.848190 kubelet[2238]: I0130 15:45:52.848177 2238 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 15:45:52.854313 kubelet[2238]: I0130 15:45:52.854220 2238 policy_none.go:49] "None policy: Start"
Jan 30 15:45:52.854313 kubelet[2238]: I0130 15:45:52.854251 2238 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 30 15:45:52.854313 kubelet[2238]: I0130 15:45:52.854266 2238 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 15:45:52.870460 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 15:45:52.890396 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 15:45:52.898998 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 15:45:52.910150 kubelet[2238]: E0130 15:45:52.910112 2238 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found"
Jan 30 15:45:52.912585 kubelet[2238]: I0130 15:45:52.912537 2238 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 15:45:52.913020 kubelet[2238]: I0130 15:45:52.912976 2238 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 15:45:52.913140 kubelet[2238]: I0130 15:45:52.913031 2238 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 15:45:52.913848 kubelet[2238]: I0130 15:45:52.913817 2238 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 15:45:52.917972 kubelet[2238]: E0130 15:45:52.917934 2238 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 30 15:45:52.918518 kubelet[2238]: E0130 15:45:52.918451 2238 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found"
Jan 30 15:45:52.950899 systemd[1]: Created slice kubepods-burstable-pod311074102b6989260c1d31a5f2783bb1.slice - libcontainer container kubepods-burstable-pod311074102b6989260c1d31a5f2783bb1.slice.
Jan 30 15:45:52.967705 kubelet[2238]: E0130 15:45:52.967134 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:52.974432 systemd[1]: Created slice kubepods-burstable-pod6c1f936e102fefa4264e5965eb7b63a7.slice - libcontainer container kubepods-burstable-pod6c1f936e102fefa4264e5965eb7b63a7.slice.
Jan 30 15:45:52.991918 kubelet[2238]: E0130 15:45:52.991879 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:52.995739 systemd[1]: Created slice kubepods-burstable-pod18ff90b85bef74bfdc0d387333573f01.slice - libcontainer container kubepods-burstable-pod18ff90b85bef74bfdc0d387333573f01.slice.
Jan 30 15:45:52.999794 kubelet[2238]: E0130 15:45:52.999757 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.016338 kubelet[2238]: I0130 15:45:53.016281 2238 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.017113 kubelet[2238]: E0130 15:45:53.016944 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-f-c7edc085f7.novalocal?timeout=10s\": dial tcp 172.24.4.138:6443: connect: connection refused" interval="400ms"
Jan 30 15:45:53.017113 kubelet[2238]: E0130 15:45:53.016997 2238 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.138:6443/api/v1/nodes\": dial tcp 172.24.4.138:6443: connect: connection refused" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.017816 kubelet[2238]: I0130 15:45:53.017485 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.017816 kubelet[2238]: I0130 15:45:53.017585 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.017816 kubelet[2238]: I0130 15:45:53.017638 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/311074102b6989260c1d31a5f2783bb1-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"311074102b6989260c1d31a5f2783bb1\") " pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.017816 kubelet[2238]: I0130 15:45:53.017741 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/311074102b6989260c1d31a5f2783bb1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"311074102b6989260c1d31a5f2783bb1\") " pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.018482 kubelet[2238]: I0130 15:45:53.018197 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.018482 kubelet[2238]: I0130 15:45:53.018265 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.018482 kubelet[2238]: I0130 15:45:53.018313 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.018482 kubelet[2238]: I0130 15:45:53.018362 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/18ff90b85bef74bfdc0d387333573f01-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"18ff90b85bef74bfdc0d387333573f01\") " pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.018833 kubelet[2238]: I0130 15:45:53.018406 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/311074102b6989260c1d31a5f2783bb1-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"311074102b6989260c1d31a5f2783bb1\") " pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.220414 kubelet[2238]: I0130 15:45:53.220247 2238 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.222477 kubelet[2238]: E0130 15:45:53.222376 2238 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.138:6443/api/v1/nodes\": dial tcp 172.24.4.138:6443: connect: connection refused" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.270150 containerd[1471]: time="2025-01-30T15:45:53.270031236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal,Uid:311074102b6989260c1d31a5f2783bb1,Namespace:kube-system,Attempt:0,}"
Jan 30 15:45:53.297010 containerd[1471]: time="2025-01-30T15:45:53.296860106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal,Uid:6c1f936e102fefa4264e5965eb7b63a7,Namespace:kube-system,Attempt:0,}"
Jan 30 15:45:53.302969 containerd[1471]: time="2025-01-30T15:45:53.302334876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal,Uid:18ff90b85bef74bfdc0d387333573f01,Namespace:kube-system,Attempt:0,}"
Jan 30 15:45:53.418374 kubelet[2238]: E0130 15:45:53.418293 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-f-c7edc085f7.novalocal?timeout=10s\": dial tcp 172.24.4.138:6443: connect: connection refused" interval="800ms"
Jan 30 15:45:53.625914 kubelet[2238]: I0130 15:45:53.625848 2238 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.626445 kubelet[2238]: E0130 15:45:53.626378 2238 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.138:6443/api/v1/nodes\": dial tcp 172.24.4.138:6443: connect: connection refused" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:53.650731 kubelet[2238]: W0130 15:45:53.649461 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Jan 30 15:45:53.650731 kubelet[2238]: E0130 15:45:53.649548 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.138:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:45:53.712320 kubelet[2238]: W0130 15:45:53.712260 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Jan 30 15:45:53.712757 kubelet[2238]: E0130 15:45:53.712617 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.138:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:45:53.869738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524138265.mount: Deactivated successfully.
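The three RunPodSandbox requests above are the CRI calls the kubelet issues for the static pod manifests it found under /etc/kubernetes/manifests. The manifests themselves are not reproduced in this log; purely as an illustrative sketch, abbreviated to the fields relevant here (the image tag and command are assumptions inferred from the kubelet version logged later), a static pod file of roughly this shape would produce the kube-apiserver sandbox:

    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {"name": "kube-apiserver", "namespace": "kube-system"},
      "spec": {
        "hostNetwork": true,
        "priorityClassName": "system-node-critical",
        "containers": [
          {
            "name": "kube-apiserver",
            "image": "registry.k8s.io/kube-apiserver:v1.32.0",
            "command": ["kube-apiserver"]
          }
        ]
      }
    }

The kubelet derives a static pod's UID from a hash of the manifest, which is why the same hex UID (311074102b6989260c1d31a5f2783bb1 for the apiserver) recurs in the kubepods-burstable-pod…slice units and the volume reconciler entries.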
Jan 30 15:45:53.885852 containerd[1471]: time="2025-01-30T15:45:53.883921572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:45:53.886751 containerd[1471]: time="2025-01-30T15:45:53.886639481Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:45:53.890029 containerd[1471]: time="2025-01-30T15:45:53.889955743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 15:45:53.890741 containerd[1471]: time="2025-01-30T15:45:53.890639909Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 15:45:53.891027 containerd[1471]: time="2025-01-30T15:45:53.890980199Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:45:53.893261 containerd[1471]: time="2025-01-30T15:45:53.893187494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 30 15:45:53.894009 containerd[1471]: time="2025-01-30T15:45:53.893932268Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:45:53.903425 containerd[1471]: time="2025-01-30T15:45:53.903347287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:45:53.906067 containerd[1471]: time="2025-01-30T15:45:53.905991945Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.769736ms"
Jan 30 15:45:53.913650 containerd[1471]: time="2025-01-30T15:45:53.913559695Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.048803ms"
Jan 30 15:45:53.924971 containerd[1471]: time="2025-01-30T15:45:53.924880398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.87473ms"
Jan 30 15:45:54.062698 kubelet[2238]: W0130 15:45:54.062076 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-f-c7edc085f7.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Jan 30 15:45:54.062698 kubelet[2238]: E0130 15:45:54.062225 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-f-c7edc085f7.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.138:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:45:54.112588 containerd[1471]: time="2025-01-30T15:45:54.111769776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:45:54.112588 containerd[1471]: time="2025-01-30T15:45:54.111903877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:45:54.112588 containerd[1471]: time="2025-01-30T15:45:54.111926668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:45:54.112588 containerd[1471]: time="2025-01-30T15:45:54.112024083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:45:54.120525 containerd[1471]: time="2025-01-30T15:45:54.120392831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:45:54.120660 containerd[1471]: time="2025-01-30T15:45:54.120578675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:45:54.123852 containerd[1471]: time="2025-01-30T15:45:54.123606906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:45:54.123852 containerd[1471]: time="2025-01-30T15:45:54.123775670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:45:54.135136 containerd[1471]: time="2025-01-30T15:45:54.133711183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:45:54.135136 containerd[1471]: time="2025-01-30T15:45:54.134816059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:45:54.135136 containerd[1471]: time="2025-01-30T15:45:54.134833410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:45:54.135136 containerd[1471]: time="2025-01-30T15:45:54.134907684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:45:54.151315 systemd[1]: Started cri-containerd-c76e4b8ca2d2d61987072e74c63f76494b4b369f71e179b5e4b679222b1e436d.scope - libcontainer container c76e4b8ca2d2d61987072e74c63f76494b4b369f71e179b5e4b679222b1e436d.
Jan 30 15:45:54.168922 systemd[1]: Started cri-containerd-accbc236f5b9e7e7cb54e7ba28fbe11e50669b4387ec946f8d6a02f1c2a26bc3.scope - libcontainer container accbc236f5b9e7e7cb54e7ba28fbe11e50669b4387ec946f8d6a02f1c2a26bc3.
Jan 30 15:45:54.170697 systemd[1]: Started cri-containerd-f8f4baa47c928c445aa5af56abceb05b30c2051d4eb584c84b96e05cea501a67.scope - libcontainer container f8f4baa47c928c445aa5af56abceb05b30c2051d4eb584c84b96e05cea501a67.
Jan 30 15:45:54.219227 kubelet[2238]: E0130 15:45:54.219091 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-f-c7edc085f7.novalocal?timeout=10s\": dial tcp 172.24.4.138:6443: connect: connection refused" interval="1.6s"
Jan 30 15:45:54.231018 containerd[1471]: time="2025-01-30T15:45:54.230665035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal,Uid:6c1f936e102fefa4264e5965eb7b63a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"accbc236f5b9e7e7cb54e7ba28fbe11e50669b4387ec946f8d6a02f1c2a26bc3\""
Jan 30 15:45:54.242185 containerd[1471]: time="2025-01-30T15:45:54.241808248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal,Uid:18ff90b85bef74bfdc0d387333573f01,Namespace:kube-system,Attempt:0,} returns sandbox id \"c76e4b8ca2d2d61987072e74c63f76494b4b369f71e179b5e4b679222b1e436d\""
Jan 30 15:45:54.245572 containerd[1471]: time="2025-01-30T15:45:54.245534394Z" level=info msg="CreateContainer within sandbox \"accbc236f5b9e7e7cb54e7ba28fbe11e50669b4387ec946f8d6a02f1c2a26bc3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 15:45:54.248067 containerd[1471]: time="2025-01-30T15:45:54.248040538Z" level=info msg="CreateContainer within sandbox \"c76e4b8ca2d2d61987072e74c63f76494b4b369f71e179b5e4b679222b1e436d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 15:45:54.255075 containerd[1471]: time="2025-01-30T15:45:54.255046437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal,Uid:311074102b6989260c1d31a5f2783bb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8f4baa47c928c445aa5af56abceb05b30c2051d4eb584c84b96e05cea501a67\""
Jan 30 15:45:54.258117 containerd[1471]: time="2025-01-30T15:45:54.258048321Z" level=info msg="CreateContainer within sandbox \"f8f4baa47c928c445aa5af56abceb05b30c2051d4eb584c84b96e05cea501a67\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 15:45:54.281946 containerd[1471]: time="2025-01-30T15:45:54.281908126Z" level=info msg="CreateContainer within sandbox \"accbc236f5b9e7e7cb54e7ba28fbe11e50669b4387ec946f8d6a02f1c2a26bc3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"25bb54aab7f0c1cca29647bdfdcd7feabbe7dbfb0def6afc6043e4e7e2ca4841\""
Jan 30 15:45:54.283026 containerd[1471]: time="2025-01-30T15:45:54.282989138Z" level=info msg="StartContainer for \"25bb54aab7f0c1cca29647bdfdcd7feabbe7dbfb0def6afc6043e4e7e2ca4841\""
Jan 30 15:45:54.291340 containerd[1471]: time="2025-01-30T15:45:54.291284915Z" level=info msg="CreateContainer within sandbox \"c76e4b8ca2d2d61987072e74c63f76494b4b369f71e179b5e4b679222b1e436d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bff858e7c90e383967ef2885b909bf2aa6721bbc8cfc2713e9c4035ae0d9fa1d\""
Jan 30 15:45:54.292699 containerd[1471]: time="2025-01-30T15:45:54.292293778Z" level=info msg="StartContainer for \"bff858e7c90e383967ef2885b909bf2aa6721bbc8cfc2713e9c4035ae0d9fa1d\""
Jan 30 15:45:54.304646 containerd[1471]: time="2025-01-30T15:45:54.304598420Z" level=info msg="CreateContainer within sandbox \"f8f4baa47c928c445aa5af56abceb05b30c2051d4eb584c84b96e05cea501a67\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"286ee1d0a889c9e621ea72ba6733f609530eb04eda3e6ea1aa51521189ceaf73\""
Jan 30 15:45:54.306391 containerd[1471]: time="2025-01-30T15:45:54.306358302Z" level=info msg="StartContainer for \"286ee1d0a889c9e621ea72ba6733f609530eb04eda3e6ea1aa51521189ceaf73\""
Jan 30 15:45:54.320848 systemd[1]: Started cri-containerd-25bb54aab7f0c1cca29647bdfdcd7feabbe7dbfb0def6afc6043e4e7e2ca4841.scope - libcontainer container 25bb54aab7f0c1cca29647bdfdcd7feabbe7dbfb0def6afc6043e4e7e2ca4841.
Jan 30 15:45:54.333843 systemd[1]: Started cri-containerd-bff858e7c90e383967ef2885b909bf2aa6721bbc8cfc2713e9c4035ae0d9fa1d.scope - libcontainer container bff858e7c90e383967ef2885b909bf2aa6721bbc8cfc2713e9c4035ae0d9fa1d.
Jan 30 15:45:54.353842 systemd[1]: Started cri-containerd-286ee1d0a889c9e621ea72ba6733f609530eb04eda3e6ea1aa51521189ceaf73.scope - libcontainer container 286ee1d0a889c9e621ea72ba6733f609530eb04eda3e6ea1aa51521189ceaf73.
Jan 30 15:45:54.373777 kubelet[2238]: W0130 15:45:54.373589 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Jan 30 15:45:54.373777 kubelet[2238]: E0130 15:45:54.373665 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.138:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:45:54.395615 containerd[1471]: time="2025-01-30T15:45:54.395392822Z" level=info msg="StartContainer for \"25bb54aab7f0c1cca29647bdfdcd7feabbe7dbfb0def6afc6043e4e7e2ca4841\" returns successfully"
Jan 30 15:45:54.427960 containerd[1471]: time="2025-01-30T15:45:54.426924643Z" level=info msg="StartContainer for \"286ee1d0a889c9e621ea72ba6733f609530eb04eda3e6ea1aa51521189ceaf73\" returns successfully"
Jan 30 15:45:54.427960 containerd[1471]: time="2025-01-30T15:45:54.426941102Z" level=info msg="StartContainer for \"bff858e7c90e383967ef2885b909bf2aa6721bbc8cfc2713e9c4035ae0d9fa1d\" returns successfully"
Jan 30 15:45:54.429998 kubelet[2238]: I0130 15:45:54.429819 2238 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:54.430349 kubelet[2238]: E0130 15:45:54.430311 2238 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.138:6443/api/v1/nodes\": dial tcp 172.24.4.138:6443: connect: connection refused" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:54.861499 kubelet[2238]: E0130 15:45:54.861473 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:54.864968 kubelet[2238]: E0130 15:45:54.864920 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:54.868696 kubelet[2238]: E0130 15:45:54.866656 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:55.868762 kubelet[2238]: E0130 15:45:55.868490 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:55.869421 kubelet[2238]: E0130 15:45:55.869299 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.032831 kubelet[2238]: I0130 15:45:56.032211 2238 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.425941 kubelet[2238]: E0130 15:45:56.425873 2238 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.478881 kubelet[2238]: E0130 15:45:56.478487 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-f-c7edc085f7.novalocal.181f82f10047670f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-f-c7edc085f7.novalocal,UID:ci-4081-3-0-f-c7edc085f7.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-f-c7edc085f7.novalocal,},FirstTimestamp:2025-01-30 15:45:52.791709455 +0000 UTC m=+0.985452293,LastTimestamp:2025-01-30 15:45:52.791709455 +0000 UTC m=+0.985452293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-f-c7edc085f7.novalocal,}"
Jan 30 15:45:56.530014 kubelet[2238]: I0130 15:45:56.529894 2238 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.548042 kubelet[2238]: E0130 15:45:56.547854 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-f-c7edc085f7.novalocal.181f82f102ea416d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-f-c7edc085f7.novalocal,UID:ci-4081-3-0-f-c7edc085f7.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-f-c7edc085f7.novalocal,},FirstTimestamp:2025-01-30 15:45:52.835936621 +0000 UTC m=+1.029679439,LastTimestamp:2025-01-30 15:45:52.835936621 +0000 UTC m=+1.029679439,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-f-c7edc085f7.novalocal,}"
Jan 30 15:45:56.605055 kubelet[2238]: E0130 15:45:56.604864 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-f-c7edc085f7.novalocal.181f82f10399ef81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-f-c7edc085f7.novalocal,UID:ci-4081-3-0-f-c7edc085f7.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-3-0-f-c7edc085f7.novalocal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-f-c7edc085f7.novalocal,},FirstTimestamp:2025-01-30 15:45:52.847449985 +0000 UTC m=+1.041192783,LastTimestamp:2025-01-30 15:45:52.847449985 +0000 UTC m=+1.041192783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-f-c7edc085f7.novalocal,}"
Jan 30 15:45:56.610984 kubelet[2238]: I0130 15:45:56.610931 2238 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.620694 kubelet[2238]: E0130 15:45:56.620625 2238 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.620694 kubelet[2238]: I0130 15:45:56.620663 2238 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.623217 kubelet[2238]: E0130 15:45:56.623051 2238 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.623217 kubelet[2238]: I0130 15:45:56.623077 2238 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.624955 kubelet[2238]: E0130 15:45:56.624921 2238 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:56.662542 kubelet[2238]: E0130 15:45:56.662284 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-f-c7edc085f7.novalocal.181f82f1039a007d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-f-c7edc085f7.novalocal,UID:ci-4081-3-0-f-c7edc085f7.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4081-3-0-f-c7edc085f7.novalocal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-f-c7edc085f7.novalocal,},FirstTimestamp:2025-01-30 15:45:52.847454333 +0000 UTC m=+1.041197121,LastTimestamp:2025-01-30 15:45:52.847454333 +0000 UTC m=+1.041197121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-f-c7edc085f7.novalocal,}"
Jan 30 15:45:56.718475 kubelet[2238]: E0130 15:45:56.717544 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-f-c7edc085f7.novalocal.181f82f1039a1685 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-f-c7edc085f7.novalocal,UID:ci-4081-3-0-f-c7edc085f7.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ci-4081-3-0-f-c7edc085f7.novalocal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-f-c7edc085f7.novalocal,},FirstTimestamp:2025-01-30 15:45:52.847459973 +0000 UTC m=+1.041202771,LastTimestamp:2025-01-30 15:45:52.847459973 +0000 UTC m=+1.041202771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-f-c7edc085f7.novalocal,}"
Jan 30 15:45:56.781669 kubelet[2238]: I0130 15:45:56.780881 2238 apiserver.go:52] "Watching apiserver"
Jan 30 15:45:56.814871 kubelet[2238]: I0130 15:45:56.814825 2238 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 15:45:58.932243 kubelet[2238]: I0130 15:45:58.932191 2238 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:45:58.940799 kubelet[2238]: W0130 15:45:58.940553 2238 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 15:45:59.656949 systemd[1]: Reloading requested from client PID 2512 ('systemctl') (unit session-11.scope)...
Jan 30 15:45:59.657557 systemd[1]: Reloading...
Jan 30 15:45:59.772800 zram_generator::config[2551]: No configuration found.
Jan 30 15:45:59.919236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 15:46:00.025513 systemd[1]: Reloading finished in 367 ms.
Jan 30 15:46:00.068736 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 15:46:00.084879 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 15:46:00.085083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 15:46:00.085129 systemd[1]: kubelet.service: Consumed 1.404s CPU time, 126.4M memory peak, 0B memory swap peak.
Jan 30 15:46:00.091025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 15:46:00.219753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 15:46:00.230233 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 15:46:00.483950 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 15:46:00.483950 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 30 15:46:00.483950 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
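The "no PriorityClass with name system-node-critical was found" failures at 15:45:56 above occur because the mirror pods reference a built-in priority class that the freshly started API server had not yet finished installing among its bootstrap objects; by 15:46:00-15:46:01 the mirror pod creations go through. For reference, a sketch of that built-in object as it exists once installed, written from the well-known upstream defaults rather than copied from this cluster:

    {
      "apiVersion": "scheduling.k8s.io/v1",
      "kind": "PriorityClass",
      "metadata": {"name": "system-node-critical"},
      "value": 2000001000,
      "globalDefault": false,
      "description": "Used for system critical pods that must not be moved from their current node."
    }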
Jan 30 15:46:00.484302 kubelet[2615]: I0130 15:46:00.484098 2615 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:46:00.498340 kubelet[2615]: I0130 15:46:00.498015 2615 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 15:46:00.498340 kubelet[2615]: I0130 15:46:00.498048 2615 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:46:00.498551 kubelet[2615]: I0130 15:46:00.498499 2615 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 15:46:00.501453 kubelet[2615]: I0130 15:46:00.500749 2615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 15:46:00.505044 kubelet[2615]: I0130 15:46:00.503234 2615 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:46:00.510936 kubelet[2615]: E0130 15:46:00.510875 2615 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 15:46:00.511107 kubelet[2615]: I0130 15:46:00.510935 2615 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 15:46:00.515436 kubelet[2615]: I0130 15:46:00.515401 2615 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 15:46:00.515652 kubelet[2615]: I0130 15:46:00.515601 2615 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:46:00.515830 kubelet[2615]: I0130 15:46:00.515635 2615 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-f-c7edc085f7.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 15:46:00.515830 kubelet[2615]: I0130 15:46:00.515833 2615 
topology_manager.go:138] "Creating topology manager with none policy" Jan 30 15:46:00.515830 kubelet[2615]: I0130 15:46:00.515843 2615 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 15:46:00.515830 kubelet[2615]: I0130 15:46:00.515878 2615 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:46:00.516191 kubelet[2615]: I0130 15:46:00.516075 2615 kubelet.go:446] "Attempting to sync node with API server" Jan 30 15:46:00.516191 kubelet[2615]: I0130 15:46:00.516102 2615 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:46:00.519690 kubelet[2615]: I0130 15:46:00.516521 2615 kubelet.go:352] "Adding apiserver pod source" Jan 30 15:46:00.519690 kubelet[2615]: I0130 15:46:00.516542 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:46:00.520048 kubelet[2615]: I0130 15:46:00.520028 2615 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:46:00.520551 kubelet[2615]: I0130 15:46:00.520536 2615 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:46:00.521585 kubelet[2615]: I0130 15:46:00.521122 2615 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 15:46:00.521798 kubelet[2615]: I0130 15:46:00.521726 2615 server.go:1287] "Started kubelet" Jan 30 15:46:00.523446 kubelet[2615]: I0130 15:46:00.522816 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:46:00.523446 kubelet[2615]: I0130 15:46:00.523132 2615 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:46:00.527166 kubelet[2615]: I0130 15:46:00.525918 2615 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:46:00.528105 kubelet[2615]: I0130 15:46:00.527923 2615 server.go:490] "Adding debug handlers to kubelet server" Jan 30 15:46:00.532598 kubelet[2615]: I0130 15:46:00.532580 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:46:00.538898 kubelet[2615]: I0130 15:46:00.538872 2615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 15:46:00.543686 kubelet[2615]: I0130 15:46:00.541302 2615 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 15:46:00.545850 kubelet[2615]: E0130 15:46:00.545824 2615 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-f-c7edc085f7.novalocal\" not found" Jan 30 15:46:00.555739 kubelet[2615]: I0130 15:46:00.555693 2615 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:46:00.556042 kubelet[2615]: I0130 15:46:00.555819 2615 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:46:00.556354 kubelet[2615]: I0130 15:46:00.556324 2615 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:46:00.556354 kubelet[2615]: E0130 15:46:00.541811 2615 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:46:00.556497 kubelet[2615]: I0130 15:46:00.556441 2615 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:46:00.562498 kubelet[2615]: I0130 15:46:00.561249 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:46:00.564699 kubelet[2615]: I0130 15:46:00.563503 2615 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:46:00.564699 kubelet[2615]: I0130 15:46:00.563633 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 15:46:00.564699 kubelet[2615]: I0130 15:46:00.563659 2615 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 15:46:00.564699 kubelet[2615]: I0130 15:46:00.563703 2615 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 15:46:00.564699 kubelet[2615]: I0130 15:46:00.563712 2615 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 15:46:00.564699 kubelet[2615]: E0130 15:46:00.563754 2615 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:46:00.626168 kubelet[2615]: I0130 15:46:00.625860 2615 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 15:46:00.626168 kubelet[2615]: I0130 15:46:00.625877 2615 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 15:46:00.626168 kubelet[2615]: I0130 15:46:00.625893 2615 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:46:00.626168 kubelet[2615]: I0130 15:46:00.626087 2615 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 15:46:00.626168 kubelet[2615]: I0130 15:46:00.626099 2615 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 15:46:00.626168 kubelet[2615]: I0130 15:46:00.626117 2615 policy_none.go:49] "None policy: Start" Jan 30 15:46:00.626168 kubelet[2615]: I0130 15:46:00.626126 2615 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 15:46:00.626168 kubelet[2615]: I0130 15:46:00.626136 2615 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:46:00.626577 kubelet[2615]: I0130 15:46:00.626564 2615 state_mem.go:75] "Updated machine memory state" Jan 30 15:46:00.633919 kubelet[2615]: I0130 15:46:00.633899 2615 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:46:00.634922 kubelet[2615]: I0130 15:46:00.634909 2615 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 15:46:00.636749 kubelet[2615]: I0130 15:46:00.635827 2615 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:46:00.639152 kubelet[2615]: I0130 15:46:00.639122 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:46:00.639820 kubelet[2615]: E0130 15:46:00.637788 2615 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 15:46:00.665570 kubelet[2615]: I0130 15:46:00.665513 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.665767 kubelet[2615]: I0130 15:46:00.665653 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.666687 kubelet[2615]: I0130 15:46:00.666049 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.675736 kubelet[2615]: W0130 15:46:00.675705 2615 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:46:00.677916 kubelet[2615]: W0130 15:46:00.677814 2615 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:46:00.679089 kubelet[2615]: W0130 15:46:00.679060 2615 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:46:00.679138 kubelet[2615]: E0130 15:46:00.679121 2615 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.741692 kubelet[2615]: I0130 15:46:00.741449 2615 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.752416 kubelet[2615]: I0130 15:46:00.752267 2615 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.752416 kubelet[2615]: I0130 15:46:00.752375 2615 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.757623 kubelet[2615]: I0130 15:46:00.757556 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/311074102b6989260c1d31a5f2783bb1-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"311074102b6989260c1d31a5f2783bb1\") " pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.757742 kubelet[2615]: I0130 15:46:00.757593 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/311074102b6989260c1d31a5f2783bb1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"311074102b6989260c1d31a5f2783bb1\") " pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.757742 kubelet[2615]: I0130 15:46:00.757666 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.757742 kubelet[2615]: I0130 15:46:00.757717 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.757742 kubelet[2615]: I0130 15:46:00.757737 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/18ff90b85bef74bfdc0d387333573f01-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"18ff90b85bef74bfdc0d387333573f01\") " pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.757878 kubelet[2615]: I0130 15:46:00.757770 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/311074102b6989260c1d31a5f2783bb1-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"311074102b6989260c1d31a5f2783bb1\") " pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.757878 kubelet[2615]: I0130 15:46:00.757795 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.757878 kubelet[2615]: I0130 15:46:00.757814 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:00.757878 kubelet[2615]: I0130 15:46:00.757834 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c1f936e102fefa4264e5965eb7b63a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal\" (UID: \"6c1f936e102fefa4264e5965eb7b63a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:01.517577 kubelet[2615]: I0130 15:46:01.517509 2615 apiserver.go:52] "Watching apiserver" Jan 30 15:46:01.557627 kubelet[2615]: I0130 15:46:01.557517 2615 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:46:01.610024 kubelet[2615]: I0130 15:46:01.609537 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:01.610561 kubelet[2615]: I0130 15:46:01.610457 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:01.626859 kubelet[2615]: W0130 15:46:01.625204 2615 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:46:01.626859 kubelet[2615]: E0130 15:46:01.625307 2615 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:01.645004 kubelet[2615]: W0130 15:46:01.644940 2615 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:46:01.645216 kubelet[2615]: E0130 15:46:01.645042 2615 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:01.672058 kubelet[2615]: I0130 15:46:01.671969 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-f-c7edc085f7.novalocal" podStartSLOduration=1.671954367 podStartE2EDuration="1.671954367s" podCreationTimestamp="2025-01-30 15:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:01.667571526 +0000 UTC m=+1.427791541" watchObservedRunningTime="2025-01-30 15:46:01.671954367 +0000 UTC m=+1.432174373" Jan 30 15:46:01.689383 kubelet[2615]: I0130 15:46:01.689339 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-f-c7edc085f7.novalocal" podStartSLOduration=1.689302627 podStartE2EDuration="1.689302627s" podCreationTimestamp="2025-01-30 15:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:01.67891317 +0000 UTC m=+1.439133185" watchObservedRunningTime="2025-01-30 15:46:01.689302627 +0000 UTC m=+1.449522632" Jan 30 15:46:01.690783 kubelet[2615]: I0130 15:46:01.690750 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-f-c7edc085f7.novalocal" podStartSLOduration=3.690738929 podStartE2EDuration="3.690738929s" podCreationTimestamp="2025-01-30 15:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:01.690011871 +0000 UTC m=+1.450231886" watchObservedRunningTime="2025-01-30 15:46:01.690738929 +0000 UTC m=+1.450958934" Jan 30 15:46:04.461055 systemd[1]: Created slice kubepods-besteffort-podc932c1c6_2ef2_4bf8_b34a_857647ac6c19.slice - libcontainer container kubepods-besteffort-podc932c1c6_2ef2_4bf8_b34a_857647ac6c19.slice. 
Jan 30 15:46:04.488787 kubelet[2615]: I0130 15:46:04.488503 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c932c1c6-2ef2-4bf8-b34a-857647ac6c19-kube-proxy\") pod \"kube-proxy-mr62j\" (UID: \"c932c1c6-2ef2-4bf8-b34a-857647ac6c19\") " pod="kube-system/kube-proxy-mr62j" Jan 30 15:46:04.490809 kubelet[2615]: I0130 15:46:04.490732 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c932c1c6-2ef2-4bf8-b34a-857647ac6c19-xtables-lock\") pod \"kube-proxy-mr62j\" (UID: \"c932c1c6-2ef2-4bf8-b34a-857647ac6c19\") " pod="kube-system/kube-proxy-mr62j" Jan 30 15:46:04.490809 kubelet[2615]: I0130 15:46:04.490762 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c932c1c6-2ef2-4bf8-b34a-857647ac6c19-lib-modules\") pod \"kube-proxy-mr62j\" (UID: \"c932c1c6-2ef2-4bf8-b34a-857647ac6c19\") " pod="kube-system/kube-proxy-mr62j" Jan 30 15:46:04.490980 kubelet[2615]: I0130 15:46:04.490873 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d92rl\" (UniqueName: \"kubernetes.io/projected/c932c1c6-2ef2-4bf8-b34a-857647ac6c19-kube-api-access-d92rl\") pod \"kube-proxy-mr62j\" (UID: \"c932c1c6-2ef2-4bf8-b34a-857647ac6c19\") " pod="kube-system/kube-proxy-mr62j" Jan 30 15:46:04.550199 kubelet[2615]: I0130 15:46:04.549990 2615 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 15:46:04.551135 kubelet[2615]: I0130 15:46:04.550715 2615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 15:46:04.551246 containerd[1471]: time="2025-01-30T15:46:04.550366074Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 15:46:04.614567 kubelet[2615]: E0130 15:46:04.614289 2615 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 15:46:04.614567 kubelet[2615]: E0130 15:46:04.614357 2615 projected.go:194] Error preparing data for projected volume kube-api-access-d92rl for pod kube-system/kube-proxy-mr62j: configmap "kube-root-ca.crt" not found Jan 30 15:46:04.614567 kubelet[2615]: E0130 15:46:04.614470 2615 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c932c1c6-2ef2-4bf8-b34a-857647ac6c19-kube-api-access-d92rl podName:c932c1c6-2ef2-4bf8-b34a-857647ac6c19 nodeName:}" failed. No retries permitted until 2025-01-30 15:46:05.114432152 +0000 UTC m=+4.874652207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d92rl" (UniqueName: "kubernetes.io/projected/c932c1c6-2ef2-4bf8-b34a-857647ac6c19-kube-api-access-d92rl") pod "kube-proxy-mr62j" (UID: "c932c1c6-2ef2-4bf8-b34a-857647ac6c19") : configmap "kube-root-ca.crt" not found Jan 30 15:46:05.197065 kubelet[2615]: E0130 15:46:05.196998 2615 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 15:46:05.197065 kubelet[2615]: E0130 15:46:05.197057 2615 projected.go:194] Error preparing data for projected volume kube-api-access-d92rl for pod kube-system/kube-proxy-mr62j: configmap "kube-root-ca.crt" not found Jan 30 15:46:05.197316 kubelet[2615]: E0130 15:46:05.197167 2615 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c932c1c6-2ef2-4bf8-b34a-857647ac6c19-kube-api-access-d92rl podName:c932c1c6-2ef2-4bf8-b34a-857647ac6c19 nodeName:}" failed. No retries permitted until 2025-01-30 15:46:06.197135623 +0000 UTC m=+5.957355688 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d92rl" (UniqueName: "kubernetes.io/projected/c932c1c6-2ef2-4bf8-b34a-857647ac6c19-kube-api-access-d92rl") pod "kube-proxy-mr62j" (UID: "c932c1c6-2ef2-4bf8-b34a-857647ac6c19") : configmap "kube-root-ca.crt" not found Jan 30 15:46:05.615596 systemd[1]: Created slice kubepods-besteffort-pod074ab307_3b88_40db_a212_459a48037079.slice - libcontainer container kubepods-besteffort-pod074ab307_3b88_40db_a212_459a48037079.slice. Jan 30 15:46:05.699432 kubelet[2615]: I0130 15:46:05.699308 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/074ab307-3b88-40db-a212-459a48037079-var-lib-calico\") pod \"tigera-operator-7d68577dc5-dmjzg\" (UID: \"074ab307-3b88-40db-a212-459a48037079\") " pod="tigera-operator/tigera-operator-7d68577dc5-dmjzg" Jan 30 15:46:05.699432 kubelet[2615]: I0130 15:46:05.699407 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjd7\" (UniqueName: \"kubernetes.io/projected/074ab307-3b88-40db-a212-459a48037079-kube-api-access-8zjd7\") pod \"tigera-operator-7d68577dc5-dmjzg\" (UID: \"074ab307-3b88-40db-a212-459a48037079\") " pod="tigera-operator/tigera-operator-7d68577dc5-dmjzg" Jan 30 15:46:05.922062 containerd[1471]: time="2025-01-30T15:46:05.921863691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-dmjzg,Uid:074ab307-3b88-40db-a212-459a48037079,Namespace:tigera-operator,Attempt:0,}" Jan 30 15:46:05.994581 containerd[1471]: time="2025-01-30T15:46:05.994015866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:05.994581 containerd[1471]: time="2025-01-30T15:46:05.994132339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:05.994581 containerd[1471]: time="2025-01-30T15:46:05.994193372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:05.994581 containerd[1471]: time="2025-01-30T15:46:05.994379003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:06.028860 systemd[1]: Started cri-containerd-77fea4519b5775b63f3eaca99196bca87087c617e11fb38e28413089a2467049.scope - libcontainer container 77fea4519b5775b63f3eaca99196bca87087c617e11fb38e28413089a2467049. Jan 30 15:46:06.066993 containerd[1471]: time="2025-01-30T15:46:06.066890716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-dmjzg,Uid:074ab307-3b88-40db-a212-459a48037079,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"77fea4519b5775b63f3eaca99196bca87087c617e11fb38e28413089a2467049\"" Jan 30 15:46:06.070208 containerd[1471]: time="2025-01-30T15:46:06.070182051Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 15:46:06.273863 containerd[1471]: time="2025-01-30T15:46:06.273584316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mr62j,Uid:c932c1c6-2ef2-4bf8-b34a-857647ac6c19,Namespace:kube-system,Attempt:0,}" Jan 30 15:46:06.323105 containerd[1471]: time="2025-01-30T15:46:06.322777538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:06.323105 containerd[1471]: time="2025-01-30T15:46:06.322904551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:06.323721 containerd[1471]: time="2025-01-30T15:46:06.322950917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:06.326855 containerd[1471]: time="2025-01-30T15:46:06.324135896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:06.364123 systemd[1]: Started cri-containerd-4798d7bb055f942331b2051af3eb2f39f5587d9e231782f512234ff19e49fb4f.scope - libcontainer container 4798d7bb055f942331b2051af3eb2f39f5587d9e231782f512234ff19e49fb4f. Jan 30 15:46:06.401262 containerd[1471]: time="2025-01-30T15:46:06.401215428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mr62j,Uid:c932c1c6-2ef2-4bf8-b34a-857647ac6c19,Namespace:kube-system,Attempt:0,} returns sandbox id \"4798d7bb055f942331b2051af3eb2f39f5587d9e231782f512234ff19e49fb4f\"" Jan 30 15:46:06.406135 containerd[1471]: time="2025-01-30T15:46:06.406079324Z" level=info msg="CreateContainer within sandbox \"4798d7bb055f942331b2051af3eb2f39f5587d9e231782f512234ff19e49fb4f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 15:46:06.430471 containerd[1471]: time="2025-01-30T15:46:06.430346218Z" level=info msg="CreateContainer within sandbox \"4798d7bb055f942331b2051af3eb2f39f5587d9e231782f512234ff19e49fb4f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6397e99080f253e66cce5fb81481b2691b099f31ccbc659df407922acbb49b03\"" Jan 30 15:46:06.432160 containerd[1471]: time="2025-01-30T15:46:06.432100234Z" level=info msg="StartContainer for \"6397e99080f253e66cce5fb81481b2691b099f31ccbc659df407922acbb49b03\"" Jan 30 15:46:06.473875 systemd[1]: Started cri-containerd-6397e99080f253e66cce5fb81481b2691b099f31ccbc659df407922acbb49b03.scope - libcontainer container 6397e99080f253e66cce5fb81481b2691b099f31ccbc659df407922acbb49b03. 
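A few lines up, MountVolume.SetUp for kube-api-access-d92rl fails because the kube-root-ca.crt configmap does not exist yet, and the retry delay doubles (durationBeforeRetry 500ms, then 1s) until it is published. A minimal sketch of that doubling backoff; the starting delay and factor come straight from the log, while the 2m2s cap is an assumption, not something this log shows:

    // backoff.go - shape of the retry delay seen in the
    // nestedpendingoperations lines (500ms -> 1s -> ...).
    package main

    import (
    	"fmt"
    	"time"
    )

    // next returns the delay before the following retry attempt:
    // start at 500ms, double on each failure, clamp at max.
    func next(d, max time.Duration) time.Duration {
    	if d == 0 {
    		return 500 * time.Millisecond
    	}
    	if d *= 2; d > max {
    		return max
    	}
    	return d
    }

    func main() {
    	d := time.Duration(0)
    	for i := 0; i < 5; i++ {
    		d = next(d, 2*time.Minute+2*time.Second)
    		fmt.Println(d) // 500ms 1s 2s 4s 8s
    	}
    }
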
Jan 30 15:46:06.511715 containerd[1471]: time="2025-01-30T15:46:06.511635743Z" level=info msg="StartContainer for \"6397e99080f253e66cce5fb81481b2691b099f31ccbc659df407922acbb49b03\" returns successfully" Jan 30 15:46:07.063225 sudo[1726]: pam_unix(sudo:session): session closed for user root Jan 30 15:46:07.206217 sshd[1723]: pam_unix(sshd:session): session closed for user core Jan 30 15:46:07.213413 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Jan 30 15:46:07.213665 systemd[1]: sshd@8-172.24.4.138:22-172.24.4.1:48166.service: Deactivated successfully. Jan 30 15:46:07.218879 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 15:46:07.219534 systemd[1]: session-11.scope: Consumed 7.323s CPU time, 158.4M memory peak, 0B memory swap peak. Jan 30 15:46:07.223746 systemd-logind[1453]: Removed session 11. Jan 30 15:46:08.287648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508688336.mount: Deactivated successfully. Jan 30 15:46:09.359976 containerd[1471]: time="2025-01-30T15:46:09.359911714Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:09.361262 containerd[1471]: time="2025-01-30T15:46:09.361202537Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 15:46:09.362715 containerd[1471]: time="2025-01-30T15:46:09.362644468Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:09.365782 containerd[1471]: time="2025-01-30T15:46:09.365742385Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:09.366782 containerd[1471]: time="2025-01-30T15:46:09.366639932Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.296386731s" Jan 30 15:46:09.366782 containerd[1471]: time="2025-01-30T15:46:09.366686268Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 15:46:09.370753 containerd[1471]: time="2025-01-30T15:46:09.370716506Z" level=info msg="CreateContainer within sandbox \"77fea4519b5775b63f3eaca99196bca87087c617e11fb38e28413089a2467049\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 15:46:09.383220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681080616.mount: Deactivated successfully. 
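The kube-proxy startup completed just above follows the standard CRI ordering: RunPodSandbox yields a sandbox id, CreateContainer yields a container id within that sandbox, StartContainer launches it, and systemd tracks each as a cri-containerd-<id>.scope unit. A toy model of that ordering, not the real k8s.io/cri-api client:

    // crilifecycle.go - toy model of the RunPodSandbox /
    // CreateContainer / StartContainer sequence in the
    // containerd lines above.
    package main

    import "fmt"

    // Runtime stands in for the CRI RuntimeService verbs used here.
    type Runtime interface {
    	RunPodSandbox(pod string) (sandboxID string)
    	CreateContainer(sandboxID, name string) (containerID string)
    	StartContainer(containerID string)
    }

    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(pod string) string {
    	f.n++
    	return fmt.Sprintf("sandbox-%d", f.n)
    }

    func (f *fakeRuntime) CreateContainer(sandboxID, name string) string {
    	f.n++
    	return fmt.Sprintf("container-%d", f.n)
    }

    func (f *fakeRuntime) StartContainer(containerID string) {
    	fmt.Println("started", containerID)
    }

    func main() {
    	var r Runtime = &fakeRuntime{}
    	sb := r.RunPodSandbox("kube-system/kube-proxy-mr62j")
    	c := r.CreateContainer(sb, "kube-proxy")
    	r.StartContainer(c) // only now is "StartContainer ... returns successfully" logged
    }
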
Jan 30 15:46:09.392898 containerd[1471]: time="2025-01-30T15:46:09.392789496Z" level=info msg="CreateContainer within sandbox \"77fea4519b5775b63f3eaca99196bca87087c617e11fb38e28413089a2467049\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"88f49bca497f7d13b3b4486b75d259db9b10ccd206e542c4376a4214ba2c98fb\"" Jan 30 15:46:09.393324 containerd[1471]: time="2025-01-30T15:46:09.393288066Z" level=info msg="StartContainer for \"88f49bca497f7d13b3b4486b75d259db9b10ccd206e542c4376a4214ba2c98fb\"" Jan 30 15:46:09.429820 systemd[1]: Started cri-containerd-88f49bca497f7d13b3b4486b75d259db9b10ccd206e542c4376a4214ba2c98fb.scope - libcontainer container 88f49bca497f7d13b3b4486b75d259db9b10ccd206e542c4376a4214ba2c98fb. Jan 30 15:46:09.461050 containerd[1471]: time="2025-01-30T15:46:09.460917246Z" level=info msg="StartContainer for \"88f49bca497f7d13b3b4486b75d259db9b10ccd206e542c4376a4214ba2c98fb\" returns successfully" Jan 30 15:46:09.658169 kubelet[2615]: I0130 15:46:09.657846 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mr62j" podStartSLOduration=5.657772206 podStartE2EDuration="5.657772206s" podCreationTimestamp="2025-01-30 15:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:06.639435103 +0000 UTC m=+6.399655108" watchObservedRunningTime="2025-01-30 15:46:09.657772206 +0000 UTC m=+9.417992271" Jan 30 15:46:09.755981 kubelet[2615]: I0130 15:46:09.755863 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-dmjzg" podStartSLOduration=1.456728917 podStartE2EDuration="4.755822093s" podCreationTimestamp="2025-01-30 15:46:05 +0000 UTC" firstStartedPulling="2025-01-30 15:46:06.068641938 +0000 UTC m=+5.828861943" lastFinishedPulling="2025-01-30 15:46:09.367735114 +0000 UTC m=+9.127955119" observedRunningTime="2025-01-30 15:46:09.661705645 +0000 UTC m=+9.421925700" watchObservedRunningTime="2025-01-30 15:46:09.755822093 +0000 UTC m=+9.516042148" Jan 30 15:46:13.080947 kubelet[2615]: W0130 15:46:13.080538 2615 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081-3-0-f-c7edc085f7.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-0-f-c7edc085f7.novalocal' and this object Jan 30 15:46:13.080947 kubelet[2615]: E0130 15:46:13.080595 2615 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081-3-0-f-c7edc085f7.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-0-f-c7edc085f7.novalocal' and this object" logger="UnhandledError" Jan 30 15:46:13.087079 systemd[1]: Created slice kubepods-besteffort-pod5fa61052_3253_48f6_875f_7ea89421bdff.slice - libcontainer container kubepods-besteffort-pod5fa61052_3253_48f6_875f_7ea89421bdff.slice. 
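The pod_startup_latency_tracker lines are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why the control-plane pods, which pulled nothing, report identical values for both. Re-deriving the tigera-operator numbers from the timestamps logged above:

    // slomath.go - re-deriving podStartSLOduration for
    // tigera-operator-7d68577dc5-dmjzg from the logged timestamps.
    package main

    import (
    	"fmt"
    	"time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-01-30 15:46:05 +0000 UTC")
    	pullStart := mustParse("2025-01-30 15:46:06.068641938 +0000 UTC")
    	pullEnd := mustParse("2025-01-30 15:46:09.367735114 +0000 UTC")
    	running := mustParse("2025-01-30 15:46:09.755822093 +0000 UTC")

    	e2e := running.Sub(created)         // 4.755822093s, matches podStartE2EDuration
    	slo := e2e - pullEnd.Sub(pullStart) // 1.456728917s, matches podStartSLOduration
    	fmt.Println(e2e, slo)
    }
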
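The repeated driver-call.go / plugins.go errors that begin a few lines below are one condition reported three ways: the kubelet's FlexVolume prober expects a driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the executable is absent from this image ("executable file not found in $PATH"), the driver call therefore returns empty output, and decoding empty output as JSON produces exactly the logged message. A minimal reproduction of that last step; the DriverStatus field names here are assumed, not taken from the kubelet source:

    // emptyjson.go - reproducing "unexpected end of JSON input":
    // an absent FlexVolume driver means empty output, and
    // encoding/json rejects an empty document with this error.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // driverStatus stands in for the status payload the kubelet
    // expects back from a FlexVolume "init" call.
    type driverStatus struct {
    	Status  string `json:"status"`
    	Message string `json:"message"`
    }

    func main() {
    	var st driverStatus
    	err := json.Unmarshal([]byte(""), &st)
    	fmt.Println(err) // unexpected end of JSON input
    }
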
Jan 30 15:46:13.146281 kubelet[2615]: I0130 15:46:13.146239 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fa61052-3253-48f6-875f-7ea89421bdff-tigera-ca-bundle\") pod \"calico-typha-7f69d88c4-xd5j9\" (UID: \"5fa61052-3253-48f6-875f-7ea89421bdff\") " pod="calico-system/calico-typha-7f69d88c4-xd5j9" Jan 30 15:46:13.146569 kubelet[2615]: I0130 15:46:13.146471 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nftlk\" (UniqueName: \"kubernetes.io/projected/5fa61052-3253-48f6-875f-7ea89421bdff-kube-api-access-nftlk\") pod \"calico-typha-7f69d88c4-xd5j9\" (UID: \"5fa61052-3253-48f6-875f-7ea89421bdff\") " pod="calico-system/calico-typha-7f69d88c4-xd5j9" Jan 30 15:46:13.146569 kubelet[2615]: I0130 15:46:13.146509 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5fa61052-3253-48f6-875f-7ea89421bdff-typha-certs\") pod \"calico-typha-7f69d88c4-xd5j9\" (UID: \"5fa61052-3253-48f6-875f-7ea89421bdff\") " pod="calico-system/calico-typha-7f69d88c4-xd5j9" Jan 30 15:46:13.226147 systemd[1]: Created slice kubepods-besteffort-pod119fc95e_dec5_4afb_9826_0c9f9e00568c.slice - libcontainer container kubepods-besteffort-pod119fc95e_dec5_4afb_9826_0c9f9e00568c.slice. Jan 30 15:46:13.248731 kubelet[2615]: I0130 15:46:13.247490 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/119fc95e-dec5-4afb-9826-0c9f9e00568c-xtables-lock\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.248731 kubelet[2615]: I0130 15:46:13.247534 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/119fc95e-dec5-4afb-9826-0c9f9e00568c-cni-log-dir\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.248731 kubelet[2615]: I0130 15:46:13.247576 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/119fc95e-dec5-4afb-9826-0c9f9e00568c-flexvol-driver-host\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.248731 kubelet[2615]: I0130 15:46:13.247640 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/119fc95e-dec5-4afb-9826-0c9f9e00568c-policysync\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.248731 kubelet[2615]: I0130 15:46:13.247659 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/119fc95e-dec5-4afb-9826-0c9f9e00568c-tigera-ca-bundle\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.248971 kubelet[2615]: I0130 15:46:13.247711 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/119fc95e-dec5-4afb-9826-0c9f9e00568c-var-lib-calico\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.248971 kubelet[2615]: I0130 15:46:13.247732 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/119fc95e-dec5-4afb-9826-0c9f9e00568c-var-run-calico\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.248971 kubelet[2615]: I0130 15:46:13.247752 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/119fc95e-dec5-4afb-9826-0c9f9e00568c-cni-bin-dir\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.248971 kubelet[2615]: I0130 15:46:13.247778 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/119fc95e-dec5-4afb-9826-0c9f9e00568c-lib-modules\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.248971 kubelet[2615]: I0130 15:46:13.247799 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/119fc95e-dec5-4afb-9826-0c9f9e00568c-node-certs\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.249097 kubelet[2615]: I0130 15:46:13.247817 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d54gl\" (UniqueName: \"kubernetes.io/projected/119fc95e-dec5-4afb-9826-0c9f9e00568c-kube-api-access-d54gl\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.249097 kubelet[2615]: I0130 15:46:13.247843 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/119fc95e-dec5-4afb-9826-0c9f9e00568c-cni-net-dir\") pod \"calico-node-rw5rw\" (UID: \"119fc95e-dec5-4afb-9826-0c9f9e00568c\") " pod="calico-system/calico-node-rw5rw" Jan 30 15:46:13.345007 kubelet[2615]: E0130 15:46:13.344868 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2brc4" podUID="83a3f814-60c1-47be-8f8b-bd595ad0a1dc" Jan 30 15:46:13.355922 kubelet[2615]: E0130 15:46:13.355890 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.356124 kubelet[2615]: W0130 15:46:13.355935 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.356124 kubelet[2615]: E0130 15:46:13.355962 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 15:46:13.356540 kubelet[2615]: E0130 15:46:13.356428 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.356540 kubelet[2615]: W0130 15:46:13.356444 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.356540 kubelet[2615]: E0130 15:46:13.356486 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.357038 kubelet[2615]: E0130 15:46:13.356977 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.357038 kubelet[2615]: W0130 15:46:13.356994 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.357038 kubelet[2615]: E0130 15:46:13.357015 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.361774 kubelet[2615]: E0130 15:46:13.360333 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.361774 kubelet[2615]: W0130 15:46:13.360363 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.361774 kubelet[2615]: E0130 15:46:13.360415 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.362195 kubelet[2615]: E0130 15:46:13.362169 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.362253 kubelet[2615]: W0130 15:46:13.362213 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.362327 kubelet[2615]: E0130 15:46:13.362303 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.362576 kubelet[2615]: E0130 15:46:13.362555 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.362626 kubelet[2615]: W0130 15:46:13.362575 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.362757 kubelet[2615]: E0130 15:46:13.362708 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 15:46:13.362957 kubelet[2615]: E0130 15:46:13.362914 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.362957 kubelet[2615]: W0130 15:46:13.362928 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.364710 kubelet[2615]: E0130 15:46:13.363028 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.367420 kubelet[2615]: E0130 15:46:13.367391 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.367420 kubelet[2615]: W0130 15:46:13.367416 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.369392 kubelet[2615]: E0130 15:46:13.369360 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.369392 kubelet[2615]: W0130 15:46:13.369382 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.369498 kubelet[2615]: E0130 15:46:13.369416 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.369498 kubelet[2615]: E0130 15:46:13.369475 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.369845 kubelet[2615]: E0130 15:46:13.369821 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.369845 kubelet[2615]: W0130 15:46:13.369840 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.370226 kubelet[2615]: E0130 15:46:13.370199 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.370226 kubelet[2615]: W0130 15:46:13.370216 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.370338 kubelet[2615]: E0130 15:46:13.370231 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.370338 kubelet[2615]: E0130 15:46:13.370299 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 15:46:13.371010 kubelet[2615]: E0130 15:46:13.370528 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.371010 kubelet[2615]: W0130 15:46:13.370543 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.371010 kubelet[2615]: E0130 15:46:13.370554 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.373935 kubelet[2615]: E0130 15:46:13.373902 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.375946 kubelet[2615]: W0130 15:46:13.375818 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.375946 kubelet[2615]: E0130 15:46:13.375865 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.380692 kubelet[2615]: E0130 15:46:13.378969 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.380871 kubelet[2615]: W0130 15:46:13.380795 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.380871 kubelet[2615]: E0130 15:46:13.380823 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.431229 kubelet[2615]: E0130 15:46:13.431175 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.431229 kubelet[2615]: W0130 15:46:13.431201 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.431229 kubelet[2615]: E0130 15:46:13.431227 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.431646 kubelet[2615]: E0130 15:46:13.431479 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.431646 kubelet[2615]: W0130 15:46:13.431514 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.431646 kubelet[2615]: E0130 15:46:13.431529 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 15:46:13.432210 kubelet[2615]: E0130 15:46:13.432192 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.432210 kubelet[2615]: W0130 15:46:13.432208 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.432343 kubelet[2615]: E0130 15:46:13.432225 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.432872 kubelet[2615]: E0130 15:46:13.432812 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.432872 kubelet[2615]: W0130 15:46:13.432836 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.432872 kubelet[2615]: E0130 15:46:13.432852 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.433184 kubelet[2615]: E0130 15:46:13.433162 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.433184 kubelet[2615]: W0130 15:46:13.433176 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.433184 kubelet[2615]: E0130 15:46:13.433190 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.433529 kubelet[2615]: E0130 15:46:13.433339 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.433529 kubelet[2615]: W0130 15:46:13.433349 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.433529 kubelet[2615]: E0130 15:46:13.433358 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.435144 kubelet[2615]: E0130 15:46:13.435123 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.435144 kubelet[2615]: W0130 15:46:13.435139 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.435144 kubelet[2615]: E0130 15:46:13.435153 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 15:46:13.435522 kubelet[2615]: E0130 15:46:13.435502 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.435522 kubelet[2615]: W0130 15:46:13.435517 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.435717 kubelet[2615]: E0130 15:46:13.435529 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.435862 kubelet[2615]: E0130 15:46:13.435847 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.435862 kubelet[2615]: W0130 15:46:13.435858 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.435945 kubelet[2615]: E0130 15:46:13.435870 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.436146 kubelet[2615]: E0130 15:46:13.436092 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.436146 kubelet[2615]: W0130 15:46:13.436104 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.436146 kubelet[2615]: E0130 15:46:13.436116 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.436460 kubelet[2615]: E0130 15:46:13.436429 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.436460 kubelet[2615]: W0130 15:46:13.436444 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.436460 kubelet[2615]: E0130 15:46:13.436455 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.437451 kubelet[2615]: E0130 15:46:13.437430 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.437451 kubelet[2615]: W0130 15:46:13.437446 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.437550 kubelet[2615]: E0130 15:46:13.437459 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 15:46:13.437743 kubelet[2615]: E0130 15:46:13.437716 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.437743 kubelet[2615]: W0130 15:46:13.437727 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.437743 kubelet[2615]: E0130 15:46:13.437737 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.439034 kubelet[2615]: E0130 15:46:13.439013 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.439034 kubelet[2615]: W0130 15:46:13.439031 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.439248 kubelet[2615]: E0130 15:46:13.439045 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.439324 kubelet[2615]: E0130 15:46:13.439287 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.439324 kubelet[2615]: W0130 15:46:13.439299 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.439324 kubelet[2615]: E0130 15:46:13.439310 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.440453 kubelet[2615]: E0130 15:46:13.440415 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.440453 kubelet[2615]: W0130 15:46:13.440431 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.440453 kubelet[2615]: E0130 15:46:13.440443 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.440712 kubelet[2615]: E0130 15:46:13.440689 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.440712 kubelet[2615]: W0130 15:46:13.440703 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.440785 kubelet[2615]: E0130 15:46:13.440718 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 15:46:13.442402 kubelet[2615]: E0130 15:46:13.442287 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.442402 kubelet[2615]: W0130 15:46:13.442311 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.442402 kubelet[2615]: E0130 15:46:13.442335 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.443062 kubelet[2615]: E0130 15:46:13.442955 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.443062 kubelet[2615]: W0130 15:46:13.442968 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.443062 kubelet[2615]: E0130 15:46:13.442980 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.443574 kubelet[2615]: E0130 15:46:13.443474 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.443574 kubelet[2615]: W0130 15:46:13.443489 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.443574 kubelet[2615]: E0130 15:46:13.443502 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:46:13.449250 kubelet[2615]: E0130 15:46:13.449207 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:46:13.449250 kubelet[2615]: W0130 15:46:13.449240 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:46:13.449398 kubelet[2615]: E0130 15:46:13.449267 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 30 15:46:13.449398 kubelet[2615]: I0130 15:46:13.449312 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbjjw\" (UniqueName: \"kubernetes.io/projected/83a3f814-60c1-47be-8f8b-bd595ad0a1dc-kube-api-access-zbjjw\") pod \"csi-node-driver-2brc4\" (UID: \"83a3f814-60c1-47be-8f8b-bd595ad0a1dc\") " pod="calico-system/csi-node-driver-2brc4"
Jan 30 15:46:13.449951 kubelet[2615]: E0130 15:46:13.449926 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 15:46:13.450014 kubelet[2615]: W0130 15:46:13.449954 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 15:46:13.450014 kubelet[2615]: E0130 15:46:13.449986 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 15:46:13.450064 kubelet[2615]: I0130 15:46:13.450019 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/83a3f814-60c1-47be-8f8b-bd595ad0a1dc-registration-dir\") pod \"csi-node-driver-2brc4\" (UID: \"83a3f814-60c1-47be-8f8b-bd595ad0a1dc\") " pod="calico-system/csi-node-driver-2brc4"
Jan 30 15:46:13.451350 kubelet[2615]: I0130 15:46:13.451321 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83a3f814-60c1-47be-8f8b-bd595ad0a1dc-kubelet-dir\") pod \"csi-node-driver-2brc4\" (UID: \"83a3f814-60c1-47be-8f8b-bd595ad0a1dc\") " pod="calico-system/csi-node-driver-2brc4"
Jan 30 15:46:13.451909 kubelet[2615]: I0130 15:46:13.451853 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/83a3f814-60c1-47be-8f8b-bd595ad0a1dc-socket-dir\") pod \"csi-node-driver-2brc4\" (UID: \"83a3f814-60c1-47be-8f8b-bd595ad0a1dc\") " pod="calico-system/csi-node-driver-2brc4"
Jan 30 15:46:13.454005 kubelet[2615]: I0130 15:46:13.453934 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/83a3f814-60c1-47be-8f8b-bd595ad0a1dc-varrun\") pod \"csi-node-driver-2brc4\" (UID: \"83a3f814-60c1-47be-8f8b-bd595ad0a1dc\") " pod="calico-system/csi-node-driver-2brc4"
Jan 30 15:46:13.531892 containerd[1471]: time="2025-01-30T15:46:13.531433831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rw5rw,Uid:119fc95e-dec5-4afb-9826-0c9f9e00568c,Namespace:calico-system,Attempt:0,}"
Jan 30 15:46:13.581761 containerd[1471]: time="2025-01-30T15:46:13.579546618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:46:13.581761 containerd[1471]: time="2025-01-30T15:46:13.579973569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:46:13.581761 containerd[1471]: time="2025-01-30T15:46:13.579988417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:13.581761 containerd[1471]: time="2025-01-30T15:46:13.580115021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:13.611003 systemd[1]: Started cri-containerd-6cdad0cf2d6673bbf014baa413dd7b2c2385660faf991fa2898aeddc73549f38.scope - libcontainer container 6cdad0cf2d6673bbf014baa413dd7b2c2385660faf991fa2898aeddc73549f38.
Jan 30 15:46:13.666380 containerd[1471]: time="2025-01-30T15:46:13.666261513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rw5rw,Uid:119fc95e-dec5-4afb-9826-0c9f9e00568c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6cdad0cf2d6673bbf014baa413dd7b2c2385660faf991fa2898aeddc73549f38\""
Jan 30 15:46:13.669541 containerd[1471]: time="2025-01-30T15:46:13.669460579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 15:46:13.998608 containerd[1471]: time="2025-01-30T15:46:13.998446141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f69d88c4-xd5j9,Uid:5fa61052-3253-48f6-875f-7ea89421bdff,Namespace:calico-system,Attempt:0,}"
Jan 30 15:46:14.036274 containerd[1471]: time="2025-01-30T15:46:14.035644368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:46:14.036274 containerd[1471]: time="2025-01-30T15:46:14.035792152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:46:14.036274 containerd[1471]: time="2025-01-30T15:46:14.035826485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:14.036274 containerd[1471]: time="2025-01-30T15:46:14.035962838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:14.064951 systemd[1]: Started cri-containerd-27f2a45d58b86a1364e15cfb9db97f96e1eaf804826a69ba54d626af49f7ce7f.scope - libcontainer container 27f2a45d58b86a1364e15cfb9db97f96e1eaf804826a69ba54d626af49f7ce7f.
Jan 30 15:46:14.115027 containerd[1471]: time="2025-01-30T15:46:14.114815903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f69d88c4-xd5j9,Uid:5fa61052-3253-48f6-875f-7ea89421bdff,Namespace:calico-system,Attempt:0,} returns sandbox id \"27f2a45d58b86a1364e15cfb9db97f96e1eaf804826a69ba54d626af49f7ce7f\""
Jan 30 15:46:15.353648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676851289.mount: Deactivated successfully.
Jan 30 15:46:15.548401 containerd[1471]: time="2025-01-30T15:46:15.548171181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:46:15.549447 containerd[1471]: time="2025-01-30T15:46:15.549406283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 30 15:46:15.550775 containerd[1471]: time="2025-01-30T15:46:15.550718447Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:46:15.553487 containerd[1471]: time="2025-01-30T15:46:15.553441770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:46:15.554646 containerd[1471]: time="2025-01-30T15:46:15.554144574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.884638009s"
Jan 30 15:46:15.554646 containerd[1471]: time="2025-01-30T15:46:15.554192984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 30 15:46:15.556972 containerd[1471]: time="2025-01-30T15:46:15.556945530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 15:46:15.557881 containerd[1471]: time="2025-01-30T15:46:15.557855588Z" level=info msg="CreateContainer within sandbox \"6cdad0cf2d6673bbf014baa413dd7b2c2385660faf991fa2898aeddc73549f38\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 30 15:46:15.565287 kubelet[2615]: E0130 15:46:15.565001 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2brc4" podUID="83a3f814-60c1-47be-8f8b-bd595ad0a1dc"
Jan 30 15:46:15.597606 containerd[1471]: time="2025-01-30T15:46:15.597488828Z" level=info msg="CreateContainer within sandbox \"6cdad0cf2d6673bbf014baa413dd7b2c2385660faf991fa2898aeddc73549f38\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a94dc034b7a530597e2073528497c10637d8fde250c67b5cbbc1146b370f66ed\""
Jan 30 15:46:15.598776 containerd[1471]: time="2025-01-30T15:46:15.598708680Z" level=info msg="StartContainer for \"a94dc034b7a530597e2073528497c10637d8fde250c67b5cbbc1146b370f66ed\""
Jan 30 15:46:15.627891 systemd[1]: Started cri-containerd-a94dc034b7a530597e2073528497c10637d8fde250c67b5cbbc1146b370f66ed.scope - libcontainer container a94dc034b7a530597e2073528497c10637d8fde250c67b5cbbc1146b370f66ed.
Jan 30 15:46:15.658617 containerd[1471]: time="2025-01-30T15:46:15.658420281Z" level=info msg="StartContainer for \"a94dc034b7a530597e2073528497c10637d8fde250c67b5cbbc1146b370f66ed\" returns successfully"
Jan 30 15:46:15.668193 systemd[1]: cri-containerd-a94dc034b7a530597e2073528497c10637d8fde250c67b5cbbc1146b370f66ed.scope: Deactivated successfully.
Jan 30 15:46:16.180576 containerd[1471]: time="2025-01-30T15:46:16.180468003Z" level=info msg="shim disconnected" id=a94dc034b7a530597e2073528497c10637d8fde250c67b5cbbc1146b370f66ed namespace=k8s.io
Jan 30 15:46:16.180576 containerd[1471]: time="2025-01-30T15:46:16.180565413Z" level=warning msg="cleaning up after shim disconnected" id=a94dc034b7a530597e2073528497c10637d8fde250c67b5cbbc1146b370f66ed namespace=k8s.io
Jan 30 15:46:16.180576 containerd[1471]: time="2025-01-30T15:46:16.180587495Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:46:16.219039 containerd[1471]: time="2025-01-30T15:46:16.218966175Z" level=warning msg="cleanup warnings time=\"2025-01-30T15:46:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 15:46:16.291799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a94dc034b7a530597e2073528497c10637d8fde250c67b5cbbc1146b370f66ed-rootfs.mount: Deactivated successfully.
Jan 30 15:46:17.565158 kubelet[2615]: E0130 15:46:17.565082 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2brc4" podUID="83a3f814-60c1-47be-8f8b-bd595ad0a1dc"
Jan 30 15:46:18.723743 containerd[1471]: time="2025-01-30T15:46:18.722921711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:46:18.724197 containerd[1471]: time="2025-01-30T15:46:18.724155584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Jan 30 15:46:18.725555 containerd[1471]: time="2025-01-30T15:46:18.725518118Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:46:18.733715 containerd[1471]: time="2025-01-30T15:46:18.733492500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:46:18.739810 containerd[1471]: time="2025-01-30T15:46:18.739605914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.182522829s"
Jan 30 15:46:18.739810 containerd[1471]: time="2025-01-30T15:46:18.739655416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 30 15:46:18.748737 containerd[1471]: time="2025-01-30T15:46:18.747804414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 30 15:46:18.760162 containerd[1471]: time="2025-01-30T15:46:18.760109216Z" level=info msg="CreateContainer within sandbox \"27f2a45d58b86a1364e15cfb9db97f96e1eaf804826a69ba54d626af49f7ce7f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 15:46:18.785908 containerd[1471]: time="2025-01-30T15:46:18.785637568Z" level=info msg="CreateContainer within sandbox \"27f2a45d58b86a1364e15cfb9db97f96e1eaf804826a69ba54d626af49f7ce7f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fd3bcfd7e4299bae4f05cf8b972242163a4ed121c7f3c36e237590e121d0052d\""
Jan 30 15:46:18.786740 containerd[1471]: time="2025-01-30T15:46:18.786548662Z" level=info msg="StartContainer for \"fd3bcfd7e4299bae4f05cf8b972242163a4ed121c7f3c36e237590e121d0052d\""
Jan 30 15:46:18.823943 systemd[1]: Started cri-containerd-fd3bcfd7e4299bae4f05cf8b972242163a4ed121c7f3c36e237590e121d0052d.scope - libcontainer container fd3bcfd7e4299bae4f05cf8b972242163a4ed121c7f3c36e237590e121d0052d.
Jan 30 15:46:18.880382 containerd[1471]: time="2025-01-30T15:46:18.880317219Z" level=info msg="StartContainer for \"fd3bcfd7e4299bae4f05cf8b972242163a4ed121c7f3c36e237590e121d0052d\" returns successfully"
Jan 30 15:46:19.564966 kubelet[2615]: E0130 15:46:19.564814 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2brc4" podUID="83a3f814-60c1-47be-8f8b-bd595ad0a1dc"
Jan 30 15:46:19.719393 kubelet[2615]: I0130 15:46:19.719283 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f69d88c4-xd5j9" podStartSLOduration=2.090185722 podStartE2EDuration="6.719250857s" podCreationTimestamp="2025-01-30 15:46:13 +0000 UTC" firstStartedPulling="2025-01-30 15:46:14.116325441 +0000 UTC m=+13.876545446" lastFinishedPulling="2025-01-30 15:46:18.745390566 +0000 UTC m=+18.505610581" observedRunningTime="2025-01-30 15:46:19.714809648 +0000 UTC m=+19.475029773" watchObservedRunningTime="2025-01-30 15:46:19.719250857 +0000 UTC m=+19.479470912"
Jan 30 15:46:20.694699 kubelet[2615]: I0130 15:46:20.694052 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 15:46:21.564608 kubelet[2615]: E0130 15:46:21.564546 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2brc4" podUID="83a3f814-60c1-47be-8f8b-bd595ad0a1dc"
Jan 30 15:46:23.566904 kubelet[2615]: E0130 15:46:23.566539 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2brc4" podUID="83a3f814-60c1-47be-8f8b-bd595ad0a1dc"
Jan 30 15:46:24.059361 kubelet[2615]: I0130 15:46:24.058138 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 15:46:24.887993 containerd[1471]: time="2025-01-30T15:46:24.887942968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:46:24.889350 containerd[1471]: time="2025-01-30T15:46:24.889280402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 30 15:46:24.890252 containerd[1471]: time="2025-01-30T15:46:24.890205005Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:46:24.893382 containerd[1471]: time="2025-01-30T15:46:24.893311847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:46:24.894868 containerd[1471]: time="2025-01-30T15:46:24.894752994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.145149827s"
Jan 30 15:46:24.894868 containerd[1471]: time="2025-01-30T15:46:24.894782799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 30 15:46:24.898502 containerd[1471]: time="2025-01-30T15:46:24.896943818Z" level=info msg="CreateContainer within sandbox \"6cdad0cf2d6673bbf014baa413dd7b2c2385660faf991fa2898aeddc73549f38\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 30 15:46:24.915219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount990757370.mount: Deactivated successfully.
Jan 30 15:46:24.923492 containerd[1471]: time="2025-01-30T15:46:24.923455911Z" level=info msg="CreateContainer within sandbox \"6cdad0cf2d6673bbf014baa413dd7b2c2385660faf991fa2898aeddc73549f38\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4f1939a0ff507e085f8572d1a8859cad7ee1c925df23a8f099a1d35c94182737\""
Jan 30 15:46:24.924169 containerd[1471]: time="2025-01-30T15:46:24.924145256Z" level=info msg="StartContainer for \"4f1939a0ff507e085f8572d1a8859cad7ee1c925df23a8f099a1d35c94182737\""
Jan 30 15:46:24.971995 systemd[1]: Started cri-containerd-4f1939a0ff507e085f8572d1a8859cad7ee1c925df23a8f099a1d35c94182737.scope - libcontainer container 4f1939a0ff507e085f8572d1a8859cad7ee1c925df23a8f099a1d35c94182737.
Jan 30 15:46:25.008226 containerd[1471]: time="2025-01-30T15:46:25.008141073Z" level=info msg="StartContainer for \"4f1939a0ff507e085f8572d1a8859cad7ee1c925df23a8f099a1d35c94182737\" returns successfully"
Jan 30 15:46:25.564362 kubelet[2615]: E0130 15:46:25.563993 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2brc4" podUID="83a3f814-60c1-47be-8f8b-bd595ad0a1dc"
Jan 30 15:46:27.112538 containerd[1471]: time="2025-01-30T15:46:27.112480775Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 15:46:27.116192 systemd[1]: cri-containerd-4f1939a0ff507e085f8572d1a8859cad7ee1c925df23a8f099a1d35c94182737.scope: Deactivated successfully.
Jan 30 15:46:27.152514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f1939a0ff507e085f8572d1a8859cad7ee1c925df23a8f099a1d35c94182737-rootfs.mount: Deactivated successfully.
Jan 30 15:46:27.175328 kubelet[2615]: I0130 15:46:27.175294 2615 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Jan 30 15:46:27.580438 systemd[1]: Created slice kubepods-besteffort-pod83a3f814_60c1_47be_8f8b_bd595ad0a1dc.slice - libcontainer container kubepods-besteffort-pod83a3f814_60c1_47be_8f8b_bd595ad0a1dc.slice.
Jan 30 15:46:27.589134 containerd[1471]: time="2025-01-30T15:46:27.589065448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2brc4,Uid:83a3f814-60c1-47be-8f8b-bd595ad0a1dc,Namespace:calico-system,Attempt:0,}"
Jan 30 15:46:27.878898 systemd[1]: Created slice kubepods-burstable-podf958cc09_74f2_44d1_a296_1688f8e74244.slice - libcontainer container kubepods-burstable-podf958cc09_74f2_44d1_a296_1688f8e74244.slice.
Jan 30 15:46:27.912925 kubelet[2615]: W0130 15:46:27.912825 2615 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-f-c7edc085f7.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-0-f-c7edc085f7.novalocal' and this object
Jan 30 15:46:27.912925 kubelet[2615]: E0130 15:46:27.912888 2615 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-0-f-c7edc085f7.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-3-0-f-c7edc085f7.novalocal' and this object" logger="UnhandledError"
Jan 30 15:46:27.928890 systemd[1]: Created slice kubepods-besteffort-pod8fe8c3db_c16d_4689_b438_3e2321856a39.slice - libcontainer container kubepods-besteffort-pod8fe8c3db_c16d_4689_b438_3e2321856a39.slice.
Jan 30 15:46:27.948927 systemd[1]: Created slice kubepods-burstable-podee0fa910_3c73_4131_907b_66f4fa4b13bd.slice - libcontainer container kubepods-burstable-podee0fa910_3c73_4131_907b_66f4fa4b13bd.slice.
Jan 30 15:46:27.952016 containerd[1471]: time="2025-01-30T15:46:27.951586582Z" level=info msg="shim disconnected" id=4f1939a0ff507e085f8572d1a8859cad7ee1c925df23a8f099a1d35c94182737 namespace=k8s.io
Jan 30 15:46:27.952016 containerd[1471]: time="2025-01-30T15:46:27.951738165Z" level=warning msg="cleaning up after shim disconnected" id=4f1939a0ff507e085f8572d1a8859cad7ee1c925df23a8f099a1d35c94182737 namespace=k8s.io
Jan 30 15:46:27.952016 containerd[1471]: time="2025-01-30T15:46:27.951764234Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:46:27.969734 systemd[1]: Created slice kubepods-besteffort-pod9428f27e_8e9f_42ec_b91d_69ade8069655.slice - libcontainer container kubepods-besteffort-pod9428f27e_8e9f_42ec_b91d_69ade8069655.slice.
Jan 30 15:46:27.980237 kubelet[2615]: I0130 15:46:27.979837 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hptkd\" (UniqueName: \"kubernetes.io/projected/e043c395-4b2d-4788-a24a-2ff2e7d7cf00-kube-api-access-hptkd\") pod \"calico-kube-controllers-6f9c566f8c-m47dw\" (UID: \"e043c395-4b2d-4788-a24a-2ff2e7d7cf00\") " pod="calico-system/calico-kube-controllers-6f9c566f8c-m47dw"
Jan 30 15:46:27.980237 kubelet[2615]: I0130 15:46:27.979883 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vbwq\" (UniqueName: \"kubernetes.io/projected/ee0fa910-3c73-4131-907b-66f4fa4b13bd-kube-api-access-2vbwq\") pod \"coredns-668d6bf9bc-fr6sg\" (UID: \"ee0fa910-3c73-4131-907b-66f4fa4b13bd\") " pod="kube-system/coredns-668d6bf9bc-fr6sg"
Jan 30 15:46:27.980237 kubelet[2615]: I0130 15:46:27.979913 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8fe8c3db-c16d-4689-b438-3e2321856a39-calico-apiserver-certs\") pod \"calico-apiserver-5d449c9595-lkpj9\" (UID: \"8fe8c3db-c16d-4689-b438-3e2321856a39\") " pod="calico-apiserver/calico-apiserver-5d449c9595-lkpj9"
Jan 30 15:46:27.980237 kubelet[2615]: I0130 15:46:27.979936 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9428f27e-8e9f-42ec-b91d-69ade8069655-calico-apiserver-certs\") pod \"calico-apiserver-5d449c9595-22mfp\" (UID: \"9428f27e-8e9f-42ec-b91d-69ade8069655\") " pod="calico-apiserver/calico-apiserver-5d449c9595-22mfp"
Jan 30 15:46:27.980237 kubelet[2615]: I0130 15:46:27.979956 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f958cc09-74f2-44d1-a296-1688f8e74244-config-volume\") pod \"coredns-668d6bf9bc-wzgwd\" (UID: \"f958cc09-74f2-44d1-a296-1688f8e74244\") " pod="kube-system/coredns-668d6bf9bc-wzgwd"
Jan 30 15:46:27.980463 kubelet[2615]: I0130 15:46:27.979976 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee0fa910-3c73-4131-907b-66f4fa4b13bd-config-volume\") pod \"coredns-668d6bf9bc-fr6sg\" (UID: \"ee0fa910-3c73-4131-907b-66f4fa4b13bd\") " pod="kube-system/coredns-668d6bf9bc-fr6sg"
Jan 30 15:46:27.980463 kubelet[2615]: I0130 15:46:27.979996 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dllzp\" (UniqueName: \"kubernetes.io/projected/f958cc09-74f2-44d1-a296-1688f8e74244-kube-api-access-dllzp\") pod \"coredns-668d6bf9bc-wzgwd\" (UID: \"f958cc09-74f2-44d1-a296-1688f8e74244\") " pod="kube-system/coredns-668d6bf9bc-wzgwd"
Jan 30 15:46:27.980463 kubelet[2615]: I0130 15:46:27.980018 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv922\" (UniqueName: \"kubernetes.io/projected/9428f27e-8e9f-42ec-b91d-69ade8069655-kube-api-access-wv922\") pod \"calico-apiserver-5d449c9595-22mfp\" (UID: \"9428f27e-8e9f-42ec-b91d-69ade8069655\") " pod="calico-apiserver/calico-apiserver-5d449c9595-22mfp"
Jan 30 15:46:27.980463 kubelet[2615]: I0130 15:46:27.980036 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgx4r\" (UniqueName: \"kubernetes.io/projected/8fe8c3db-c16d-4689-b438-3e2321856a39-kube-api-access-hgx4r\") pod \"calico-apiserver-5d449c9595-lkpj9\" (UID: \"8fe8c3db-c16d-4689-b438-3e2321856a39\") " pod="calico-apiserver/calico-apiserver-5d449c9595-lkpj9"
Jan 30 15:46:27.980463 kubelet[2615]: I0130 15:46:27.980057 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e043c395-4b2d-4788-a24a-2ff2e7d7cf00-tigera-ca-bundle\") pod \"calico-kube-controllers-6f9c566f8c-m47dw\" (UID: \"e043c395-4b2d-4788-a24a-2ff2e7d7cf00\") " pod="calico-system/calico-kube-controllers-6f9c566f8c-m47dw"
Jan 30 15:46:27.982446 systemd[1]: Created slice kubepods-besteffort-pode043c395_4b2d_4788_a24a_2ff2e7d7cf00.slice - libcontainer container kubepods-besteffort-pode043c395_4b2d_4788_a24a_2ff2e7d7cf00.slice.
Jan 30 15:46:28.217738 containerd[1471]: time="2025-01-30T15:46:28.215344142Z" level=error msg="Failed to destroy network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.217515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3-shm.mount: Deactivated successfully.
Jan 30 15:46:28.218772 containerd[1471]: time="2025-01-30T15:46:28.218029275Z" level=error msg="encountered an error cleaning up failed sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.218772 containerd[1471]: time="2025-01-30T15:46:28.218126957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2brc4,Uid:83a3f814-60c1-47be-8f8b-bd595ad0a1dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.219750 kubelet[2615]: E0130 15:46:28.219097 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.219750 kubelet[2615]: E0130 15:46:28.219208 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2brc4"
Jan 30 15:46:28.219750 kubelet[2615]: E0130 15:46:28.219259 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2brc4"
Jan 30 15:46:28.220183 kubelet[2615]: E0130 15:46:28.219329 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2brc4_calico-system(83a3f814-60c1-47be-8f8b-bd595ad0a1dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2brc4_calico-system(83a3f814-60c1-47be-8f8b-bd595ad0a1dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2brc4" podUID="83a3f814-60c1-47be-8f8b-bd595ad0a1dc"
Jan 30 15:46:28.290734 containerd[1471]: time="2025-01-30T15:46:28.289002042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9c566f8c-m47dw,Uid:e043c395-4b2d-4788-a24a-2ff2e7d7cf00,Namespace:calico-system,Attempt:0,}"
Jan 30 15:46:28.391293 containerd[1471]: time="2025-01-30T15:46:28.391244654Z" level=error msg="Failed to destroy network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.391823 containerd[1471]: time="2025-01-30T15:46:28.391797656Z" level=error msg="encountered an error cleaning up failed sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.391984 containerd[1471]: time="2025-01-30T15:46:28.391945462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9c566f8c-m47dw,Uid:e043c395-4b2d-4788-a24a-2ff2e7d7cf00,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.392387 kubelet[2615]: E0130 15:46:28.392313 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.392451 kubelet[2615]: E0130 15:46:28.392407 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9c566f8c-m47dw"
Jan 30 15:46:28.392451 kubelet[2615]: E0130 15:46:28.392433 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9c566f8c-m47dw"
Jan 30 15:46:28.392570 kubelet[2615]: E0130 15:46:28.392484 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f9c566f8c-m47dw_calico-system(e043c395-4b2d-4788-a24a-2ff2e7d7cf00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f9c566f8c-m47dw_calico-system(e043c395-4b2d-4788-a24a-2ff2e7d7cf00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f9c566f8c-m47dw" podUID="e043c395-4b2d-4788-a24a-2ff2e7d7cf00"
Jan 30 15:46:28.485407 containerd[1471]: time="2025-01-30T15:46:28.485042801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wzgwd,Uid:f958cc09-74f2-44d1-a296-1688f8e74244,Namespace:kube-system,Attempt:0,}"
Jan 30 15:46:28.565736 containerd[1471]: time="2025-01-30T15:46:28.565128561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fr6sg,Uid:ee0fa910-3c73-4131-907b-66f4fa4b13bd,Namespace:kube-system,Attempt:0,}"
Jan 30 15:46:28.623789 containerd[1471]: time="2025-01-30T15:46:28.623729364Z" level=error msg="Failed to destroy network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.624877 containerd[1471]: time="2025-01-30T15:46:28.624353940Z" level=error msg="encountered an error cleaning up failed sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.624877 containerd[1471]: time="2025-01-30T15:46:28.624781538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wzgwd,Uid:f958cc09-74f2-44d1-a296-1688f8e74244,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.625327 kubelet[2615]: E0130 15:46:28.625093 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.625327 kubelet[2615]: E0130 15:46:28.625171 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wzgwd"
Jan 30 15:46:28.625327 kubelet[2615]: E0130 15:46:28.625195 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wzgwd"
Jan 30 15:46:28.626204 kubelet[2615]: E0130 15:46:28.625245 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wzgwd_kube-system(f958cc09-74f2-44d1-a296-1688f8e74244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wzgwd_kube-system(f958cc09-74f2-44d1-a296-1688f8e74244)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wzgwd" podUID="f958cc09-74f2-44d1-a296-1688f8e74244"
Jan 30 15:46:28.655693 containerd[1471]: time="2025-01-30T15:46:28.655588717Z" level=error msg="Failed to destroy network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.656344 containerd[1471]: time="2025-01-30T15:46:28.656179601Z" level=error msg="encountered an error cleaning up failed sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.656344 containerd[1471]: time="2025-01-30T15:46:28.656234823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fr6sg,Uid:ee0fa910-3c73-4131-907b-66f4fa4b13bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.658399 kubelet[2615]: E0130 15:46:28.656582 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 15:46:28.658399 kubelet[2615]: E0130 15:46:28.656652 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fr6sg"
Jan 30 15:46:28.658399 kubelet[2615]: E0130 15:46:28.656704 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fr6sg"
Jan 30 15:46:28.658646 kubelet[2615]: E0130 15:46:28.656762 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fr6sg_kube-system(ee0fa910-3c73-4131-907b-66f4fa4b13bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fr6sg_kube-system(ee0fa910-3c73-4131-907b-66f4fa4b13bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fr6sg" podUID="ee0fa910-3c73-4131-907b-66f4fa4b13bd"
Jan 30 15:46:28.734926 containerd[1471]: time="2025-01-30T15:46:28.733725880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 30 15:46:28.737072 kubelet[2615]: I0130 15:46:28.735502 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3"
Jan 30 15:46:28.743657 containerd[1471]: time="2025-01-30T15:46:28.739347073Z" level=info msg="StopPodSandbox for \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\""
Jan 30 15:46:28.743657 containerd[1471]: time="2025-01-30T15:46:28.739803625Z" level=info msg="Ensure that sandbox 8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3 in task-service has been cleanup successfully"
Jan 30 15:46:28.752400 kubelet[2615]: I0130 15:46:28.752214 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226"
Jan 30 15:46:28.758203 containerd[1471]: time="2025-01-30T15:46:28.756624596Z" level=info msg="StopPodSandbox for \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\""
Jan 30 15:46:28.758203 containerd[1471]: time="2025-01-30T15:46:28.757023009Z" level=info msg="Ensure that sandbox 8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226 in task-service has been cleanup successfully"
Jan 30 15:46:28.765426 kubelet[2615]: I0130 15:46:28.765302 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca"
Jan 30 15:46:28.775398 containerd[1471]: time="2025-01-30T15:46:28.775314315Z" level=info msg="StopPodSandbox for \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\""
Jan 30 15:46:28.777102 containerd[1471]: time="2025-01-30T15:46:28.775718349Z" level=info msg="Ensure that sandbox 4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca in task-service has been cleanup successfully"
Jan 30 15:46:28.792115 kubelet[2615]: I0130 15:46:28.792085 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f"
Jan 30 15:46:28.794925 containerd[1471]: time="2025-01-30T15:46:28.794791964Z" level=info msg="StopPodSandbox for \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\""
Jan 30 15:46:28.797691 containerd[1471]: time="2025-01-30T15:46:28.797634891Z" level=info msg="Ensure that sandbox 0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f in task-service has been cleanup successfully"
Jan 30 15:46:28.861934 containerd[1471]: time="2025-01-30T15:46:28.861839896Z" level=error msg="StopPodSandbox for \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\" failed" error="failed to 
destroy network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:28.862251 kubelet[2615]: E0130 15:46:28.862124 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:46:28.862312 kubelet[2615]: E0130 15:46:28.862243 2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca"} Jan 30 15:46:28.862359 kubelet[2615]: E0130 15:46:28.862327 2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f958cc09-74f2-44d1-a296-1688f8e74244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:46:28.862462 kubelet[2615]: E0130 15:46:28.862357 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f958cc09-74f2-44d1-a296-1688f8e74244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wzgwd" podUID="f958cc09-74f2-44d1-a296-1688f8e74244" Jan 30 15:46:28.869702 containerd[1471]: time="2025-01-30T15:46:28.869624458Z" level=error msg="StopPodSandbox for \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\" failed" error="failed to destroy network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:28.870109 kubelet[2615]: E0130 15:46:28.869851 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:46:28.870109 kubelet[2615]: E0130 15:46:28.869915 2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226"} Jan 30 15:46:28.870109 kubelet[2615]: E0130 15:46:28.869951 2615 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee0fa910-3c73-4131-907b-66f4fa4b13bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:46:28.870109 kubelet[2615]: E0130 15:46:28.869980 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee0fa910-3c73-4131-907b-66f4fa4b13bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fr6sg" podUID="ee0fa910-3c73-4131-907b-66f4fa4b13bd" Jan 30 15:46:28.873221 containerd[1471]: time="2025-01-30T15:46:28.873168033Z" level=error msg="StopPodSandbox for \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\" failed" error="failed to destroy network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:28.873432 kubelet[2615]: E0130 15:46:28.873396 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Jan 30 15:46:28.873493 kubelet[2615]: E0130 15:46:28.873447 2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3"} Jan 30 15:46:28.873525 kubelet[2615]: E0130 15:46:28.873488 2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83a3f814-60c1-47be-8f8b-bd595ad0a1dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:46:28.873525 kubelet[2615]: E0130 15:46:28.873515 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83a3f814-60c1-47be-8f8b-bd595ad0a1dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2brc4" 
podUID="83a3f814-60c1-47be-8f8b-bd595ad0a1dc" Jan 30 15:46:28.880578 containerd[1471]: time="2025-01-30T15:46:28.880052214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d449c9595-22mfp,Uid:9428f27e-8e9f-42ec-b91d-69ade8069655,Namespace:calico-apiserver,Attempt:0,}" Jan 30 15:46:28.882994 containerd[1471]: time="2025-01-30T15:46:28.882948701Z" level=error msg="StopPodSandbox for \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\" failed" error="failed to destroy network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:28.883331 kubelet[2615]: E0130 15:46:28.883294 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:46:28.883403 kubelet[2615]: E0130 15:46:28.883345 2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f"} Jan 30 15:46:28.883403 kubelet[2615]: E0130 15:46:28.883385 2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e043c395-4b2d-4788-a24a-2ff2e7d7cf00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:46:28.883546 kubelet[2615]: E0130 15:46:28.883410 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e043c395-4b2d-4788-a24a-2ff2e7d7cf00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f9c566f8c-m47dw" podUID="e043c395-4b2d-4788-a24a-2ff2e7d7cf00" Jan 30 15:46:28.975448 containerd[1471]: time="2025-01-30T15:46:28.975381901Z" level=error msg="Failed to destroy network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:28.976762 containerd[1471]: time="2025-01-30T15:46:28.976644077Z" level=error msg="encountered an error cleaning up failed sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 30 15:46:28.976957 containerd[1471]: time="2025-01-30T15:46:28.976857806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d449c9595-22mfp,Uid:9428f27e-8e9f-42ec-b91d-69ade8069655,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:28.977199 kubelet[2615]: E0130 15:46:28.977135 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:28.977280 kubelet[2615]: E0130 15:46:28.977237 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d449c9595-22mfp" Jan 30 15:46:28.977314 kubelet[2615]: E0130 15:46:28.977286 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d449c9595-22mfp" Jan 30 15:46:28.977735 kubelet[2615]: E0130 15:46:28.977380 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d449c9595-22mfp_calico-apiserver(9428f27e-8e9f-42ec-b91d-69ade8069655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d449c9595-22mfp_calico-apiserver(9428f27e-8e9f-42ec-b91d-69ade8069655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d449c9595-22mfp" podUID="9428f27e-8e9f-42ec-b91d-69ade8069655" Jan 30 15:46:29.136328 containerd[1471]: time="2025-01-30T15:46:29.136124039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d449c9595-lkpj9,Uid:8fe8c3db-c16d-4689-b438-3e2321856a39,Namespace:calico-apiserver,Attempt:0,}" Jan 30 15:46:29.251635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f-shm.mount: Deactivated successfully. 
Jan 30 15:46:29.290874 containerd[1471]: time="2025-01-30T15:46:29.290757886Z" level=error msg="Failed to destroy network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:29.293183 containerd[1471]: time="2025-01-30T15:46:29.291728879Z" level=error msg="encountered an error cleaning up failed sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:29.293183 containerd[1471]: time="2025-01-30T15:46:29.291788831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d449c9595-lkpj9,Uid:8fe8c3db-c16d-4689-b438-3e2321856a39,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:29.294350 kubelet[2615]: E0130 15:46:29.293395 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:29.294350 kubelet[2615]: E0130 15:46:29.293453 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d449c9595-lkpj9" Jan 30 15:46:29.294350 kubelet[2615]: E0130 15:46:29.293480 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d449c9595-lkpj9" Jan 30 15:46:29.293646 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61-shm.mount: Deactivated successfully. 
Jan 30 15:46:29.294765 kubelet[2615]: E0130 15:46:29.293518 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d449c9595-lkpj9_calico-apiserver(8fe8c3db-c16d-4689-b438-3e2321856a39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d449c9595-lkpj9_calico-apiserver(8fe8c3db-c16d-4689-b438-3e2321856a39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d449c9595-lkpj9" podUID="8fe8c3db-c16d-4689-b438-3e2321856a39" Jan 30 15:46:29.798705 kubelet[2615]: I0130 15:46:29.798616 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:46:29.801497 containerd[1471]: time="2025-01-30T15:46:29.800535400Z" level=info msg="StopPodSandbox for \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\"" Jan 30 15:46:29.801497 containerd[1471]: time="2025-01-30T15:46:29.800944694Z" level=info msg="Ensure that sandbox 98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61 in task-service has been cleanup successfully" Jan 30 15:46:29.805500 kubelet[2615]: I0130 15:46:29.805452 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:46:29.812238 containerd[1471]: time="2025-01-30T15:46:29.812125242Z" level=info msg="StopPodSandbox for \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\"" Jan 30 15:46:29.813479 containerd[1471]: time="2025-01-30T15:46:29.813391256Z" level=info msg="Ensure that sandbox 17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616 in task-service has been cleanup successfully" Jan 30 15:46:29.887357 containerd[1471]: time="2025-01-30T15:46:29.887122684Z" level=error msg="StopPodSandbox for \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\" failed" error="failed to destroy network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:29.887749 kubelet[2615]: E0130 15:46:29.887564 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:46:29.887749 kubelet[2615]: E0130 15:46:29.887618 2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61"} Jan 30 15:46:29.887749 kubelet[2615]: E0130 15:46:29.887653 2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fe8c3db-c16d-4689-b438-3e2321856a39\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:46:29.887749 kubelet[2615]: E0130 15:46:29.887701 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fe8c3db-c16d-4689-b438-3e2321856a39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d449c9595-lkpj9" podUID="8fe8c3db-c16d-4689-b438-3e2321856a39" Jan 30 15:46:29.891183 containerd[1471]: time="2025-01-30T15:46:29.891145757Z" level=error msg="StopPodSandbox for \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\" failed" error="failed to destroy network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:46:29.891487 kubelet[2615]: E0130 15:46:29.891443 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:46:29.891559 kubelet[2615]: E0130 15:46:29.891503 2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616"} Jan 30 15:46:29.891559 kubelet[2615]: E0130 15:46:29.891545 2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9428f27e-8e9f-42ec-b91d-69ade8069655\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:46:29.891655 kubelet[2615]: E0130 15:46:29.891581 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9428f27e-8e9f-42ec-b91d-69ade8069655\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d449c9595-22mfp" podUID="9428f27e-8e9f-42ec-b91d-69ade8069655" Jan 30 15:46:38.188388 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3407274878.mount: Deactivated successfully. Jan 30 15:46:38.252404 containerd[1471]: time="2025-01-30T15:46:38.252336690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:38.253707 containerd[1471]: time="2025-01-30T15:46:38.253662130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 15:46:38.254781 containerd[1471]: time="2025-01-30T15:46:38.254758184Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:38.257687 containerd[1471]: time="2025-01-30T15:46:38.257644952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:38.258289 containerd[1471]: time="2025-01-30T15:46:38.258246646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.524455484s" Jan 30 15:46:38.258336 containerd[1471]: time="2025-01-30T15:46:38.258287233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 15:46:38.272447 containerd[1471]: time="2025-01-30T15:46:38.272150959Z" level=info msg="CreateContainer within sandbox \"6cdad0cf2d6673bbf014baa413dd7b2c2385660faf991fa2898aeddc73549f38\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 15:46:38.296666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2623531439.mount: Deactivated successfully. Jan 30 15:46:38.301760 containerd[1471]: time="2025-01-30T15:46:38.301712277Z" level=info msg="CreateContainer within sandbox \"6cdad0cf2d6673bbf014baa413dd7b2c2385660faf991fa2898aeddc73549f38\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3b025479eb8ec238728a43e3575c04bdb818f427d38111ee09efef9ec90a4faa\"" Jan 30 15:46:38.303127 containerd[1471]: time="2025-01-30T15:46:38.302982782Z" level=info msg="StartContainer for \"3b025479eb8ec238728a43e3575c04bdb818f427d38111ee09efef9ec90a4faa\"" Jan 30 15:46:38.333249 systemd[1]: Started cri-containerd-3b025479eb8ec238728a43e3575c04bdb818f427d38111ee09efef9ec90a4faa.scope - libcontainer container 3b025479eb8ec238728a43e3575c04bdb818f427d38111ee09efef9ec90a4faa. Jan 30 15:46:38.374810 containerd[1471]: time="2025-01-30T15:46:38.374756167Z" level=info msg="StartContainer for \"3b025479eb8ec238728a43e3575c04bdb818f427d38111ee09efef9ec90a4faa\" returns successfully" Jan 30 15:46:38.461283 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 15:46:38.461427 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 30 15:46:38.870629 kubelet[2615]: I0130 15:46:38.870494 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rw5rw" podStartSLOduration=1.27912545 podStartE2EDuration="25.870476702s" podCreationTimestamp="2025-01-30 15:46:13 +0000 UTC" firstStartedPulling="2025-01-30 15:46:13.66844063 +0000 UTC m=+13.428660646" lastFinishedPulling="2025-01-30 15:46:38.259791883 +0000 UTC m=+38.020011898" observedRunningTime="2025-01-30 15:46:38.869837707 +0000 UTC m=+38.630057722" watchObservedRunningTime="2025-01-30 15:46:38.870476702 +0000 UTC m=+38.630696717" Jan 30 15:46:40.232947 kernel: bpftool[3888]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 15:46:40.548271 systemd-networkd[1376]: vxlan.calico: Link UP Jan 30 15:46:40.548287 systemd-networkd[1376]: vxlan.calico: Gained carrier Jan 30 15:46:40.574007 containerd[1471]: time="2025-01-30T15:46:40.573723257Z" level=info msg="StopPodSandbox for \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\"" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.695 [INFO][3936] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.695 [INFO][3936] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" iface="eth0" netns="/var/run/netns/cni-95d9fbcd-6dbd-f482-e217-8ea44788c600" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.696 [INFO][3936] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" iface="eth0" netns="/var/run/netns/cni-95d9fbcd-6dbd-f482-e217-8ea44788c600" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.697 [INFO][3936] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" iface="eth0" netns="/var/run/netns/cni-95d9fbcd-6dbd-f482-e217-8ea44788c600" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.697 [INFO][3936] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.697 [INFO][3936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.721 [INFO][3944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" HandleID="k8s-pod-network.0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.721 [INFO][3944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.721 [INFO][3944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.731 [WARNING][3944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" HandleID="k8s-pod-network.0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.731 [INFO][3944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" HandleID="k8s-pod-network.0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.733 [INFO][3944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:46:40.736667 containerd[1471]: 2025-01-30 15:46:40.734 [INFO][3936] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:46:40.738087 containerd[1471]: time="2025-01-30T15:46:40.736820032Z" level=info msg="TearDown network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\" successfully" Jan 30 15:46:40.738087 containerd[1471]: time="2025-01-30T15:46:40.736858014Z" level=info msg="StopPodSandbox for \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\" returns successfully" Jan 30 15:46:40.739309 containerd[1471]: time="2025-01-30T15:46:40.739269836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9c566f8c-m47dw,Uid:e043c395-4b2d-4788-a24a-2ff2e7d7cf00,Namespace:calico-system,Attempt:1,}" Jan 30 15:46:40.743923 systemd[1]: run-netns-cni\x2d95d9fbcd\x2d6dbd\x2df482\x2de217\x2d8ea44788c600.mount: Deactivated successfully. 
Jan 30 15:46:41.011985 systemd-networkd[1376]: cali8370c3db44d: Link UP Jan 30 15:46:41.012516 systemd-networkd[1376]: cali8370c3db44d: Gained carrier Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.811 [INFO][3951] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0 calico-kube-controllers-6f9c566f8c- calico-system e043c395-4b2d-4788-a24a-2ff2e7d7cf00 771 0 2025-01-30 15:46:13 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f9c566f8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-f-c7edc085f7.novalocal calico-kube-controllers-6f9c566f8c-m47dw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8370c3db44d [] []}} ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Namespace="calico-system" Pod="calico-kube-controllers-6f9c566f8c-m47dw" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.811 [INFO][3951] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Namespace="calico-system" Pod="calico-kube-controllers-6f9c566f8c-m47dw" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.859 [INFO][3962] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" HandleID="k8s-pod-network.f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.873 [INFO][3962] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" HandleID="k8s-pod-network.f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-f-c7edc085f7.novalocal", "pod":"calico-kube-controllers-6f9c566f8c-m47dw", "timestamp":"2025-01-30 15:46:40.859541697 +0000 UTC"}, Hostname:"ci-4081-3-0-f-c7edc085f7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.873 [INFO][3962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.874 [INFO][3962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.874 [INFO][3962] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-f-c7edc085f7.novalocal' Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.883 [INFO][3962] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.971 [INFO][3962] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.981 [INFO][3962] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.986 [INFO][3962] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.989 [INFO][3962] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.989 [INFO][3962] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.991 [INFO][3962] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156 Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:40.997 [INFO][3962] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:41.005 [INFO][3962] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.129/26] block=192.168.112.128/26 handle="k8s-pod-network.f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:41.005 [INFO][3962] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.129/26] handle="k8s-pod-network.f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:41.005 [INFO][3962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 15:46:41.038573 containerd[1471]: 2025-01-30 15:46:41.005 [INFO][3962] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.129/26] IPv6=[] ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" HandleID="k8s-pod-network.f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:41.040338 containerd[1471]: 2025-01-30 15:46:41.007 [INFO][3951] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Namespace="calico-system" Pod="calico-kube-controllers-6f9c566f8c-m47dw" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0", GenerateName:"calico-kube-controllers-6f9c566f8c-", Namespace:"calico-system", SelfLink:"", UID:"e043c395-4b2d-4788-a24a-2ff2e7d7cf00", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9c566f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"", Pod:"calico-kube-controllers-6f9c566f8c-m47dw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8370c3db44d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:46:41.040338 containerd[1471]: 2025-01-30 15:46:41.007 [INFO][3951] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.129/32] ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Namespace="calico-system" Pod="calico-kube-controllers-6f9c566f8c-m47dw" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:41.040338 containerd[1471]: 2025-01-30 15:46:41.007 [INFO][3951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8370c3db44d ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Namespace="calico-system" Pod="calico-kube-controllers-6f9c566f8c-m47dw" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:41.040338 containerd[1471]: 2025-01-30 15:46:41.013 [INFO][3951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Namespace="calico-system" Pod="calico-kube-controllers-6f9c566f8c-m47dw" 
WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:41.040338 containerd[1471]: 2025-01-30 15:46:41.013 [INFO][3951] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Namespace="calico-system" Pod="calico-kube-controllers-6f9c566f8c-m47dw" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0", GenerateName:"calico-kube-controllers-6f9c566f8c-", Namespace:"calico-system", SelfLink:"", UID:"e043c395-4b2d-4788-a24a-2ff2e7d7cf00", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9c566f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156", Pod:"calico-kube-controllers-6f9c566f8c-m47dw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8370c3db44d", MAC:"12:e2:8a:28:4f:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:46:41.040338 containerd[1471]: 2025-01-30 15:46:41.034 [INFO][3951] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156" Namespace="calico-system" Pod="calico-kube-controllers-6f9c566f8c-m47dw" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:46:41.070037 containerd[1471]: time="2025-01-30T15:46:41.069739305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:41.070037 containerd[1471]: time="2025-01-30T15:46:41.069789630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:41.070037 containerd[1471]: time="2025-01-30T15:46:41.069804398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:41.070037 containerd[1471]: time="2025-01-30T15:46:41.069880443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:41.094349 systemd[1]: run-containerd-runc-k8s.io-f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156-runc.kF8gzu.mount: Deactivated successfully. Jan 30 15:46:41.101843 systemd[1]: Started cri-containerd-f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156.scope - libcontainer container f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156. Jan 30 15:46:41.145360 containerd[1471]: time="2025-01-30T15:46:41.145322651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9c566f8c-m47dw,Uid:e043c395-4b2d-4788-a24a-2ff2e7d7cf00,Namespace:calico-system,Attempt:1,} returns sandbox id \"f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156\"" Jan 30 15:46:41.148238 containerd[1471]: time="2025-01-30T15:46:41.147328761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 15:46:41.566276 containerd[1471]: time="2025-01-30T15:46:41.566070834Z" level=info msg="StopPodSandbox for \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\"" Jan 30 15:46:41.777098 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.795 [INFO][4068] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.795 [INFO][4068] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" iface="eth0" netns="/var/run/netns/cni-59619567-f458-2719-2fd1-6416bc320e30" Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.796 [INFO][4068] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" iface="eth0" netns="/var/run/netns/cni-59619567-f458-2719-2fd1-6416bc320e30" Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.799 [INFO][4068] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" iface="eth0" netns="/var/run/netns/cni-59619567-f458-2719-2fd1-6416bc320e30" Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.800 [INFO][4068] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.800 [INFO][4068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.865 [INFO][4074] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" HandleID="k8s-pod-network.8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.865 [INFO][4074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.866 [INFO][4074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.876 [WARNING][4074] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" HandleID="k8s-pod-network.8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.876 [INFO][4074] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" HandleID="k8s-pod-network.8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.879 [INFO][4074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:46:41.884421 containerd[1471]: 2025-01-30 15:46:41.882 [INFO][4068] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:46:41.889806 containerd[1471]: time="2025-01-30T15:46:41.887863284Z" level=info msg="TearDown network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\" successfully" Jan 30 15:46:41.889806 containerd[1471]: time="2025-01-30T15:46:41.887924651Z" level=info msg="StopPodSandbox for \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\" returns successfully" Jan 30 15:46:41.889806 containerd[1471]: time="2025-01-30T15:46:41.888803920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fr6sg,Uid:ee0fa910-3c73-4131-907b-66f4fa4b13bd,Namespace:kube-system,Attempt:1,}" Jan 30 15:46:41.893377 systemd[1]: run-netns-cni\x2d59619567\x2df458\x2d2719\x2d2fd1\x2d6416bc320e30.mount: Deactivated successfully. 
Jan 30 15:46:42.043708 systemd-networkd[1376]: cali09656b8426b: Link UP
Jan 30 15:46:42.044192 systemd-networkd[1376]: cali09656b8426b: Gained carrier
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:41.949 [INFO][4084] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0 coredns-668d6bf9bc- kube-system ee0fa910-3c73-4131-907b-66f4fa4b13bd 780 0 2025-01-30 15:46:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-f-c7edc085f7.novalocal coredns-668d6bf9bc-fr6sg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09656b8426b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Namespace="kube-system" Pod="coredns-668d6bf9bc-fr6sg" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:41.949 [INFO][4084] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Namespace="kube-system" Pod="coredns-668d6bf9bc-fr6sg" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:41.985 [INFO][4095] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" HandleID="k8s-pod-network.3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.000 [INFO][4095] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" HandleID="k8s-pod-network.3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334f80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-f-c7edc085f7.novalocal", "pod":"coredns-668d6bf9bc-fr6sg", "timestamp":"2025-01-30 15:46:41.985559013 +0000 UTC"}, Hostname:"ci-4081-3-0-f-c7edc085f7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.000 [INFO][4095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.000 [INFO][4095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
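Annotation: the Workload="ci--4081--3--0--f--…-k8s-coredns--668d6bf9bc--fr6sg-eth0" identifiers above join node, orchestrator, pod and interface with single dashes, escaping literal dashes inside each field by doubling them. A decoder sketch inferred from the names in this log (the authoritative codec lives in libcalico-go and may differ in edge cases):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitWEPName splits a Calico WorkloadEndpoint name of the shape
    // {node}-k8s-{pod}-{iface}, where a literal '-' inside a field is
    // escaped as '--'. Assumes a well-formed name with four fields.
    func splitWEPName(name string) (node, pod, iface string) {
    	const esc = "\x00" // temporarily hide escaped dashes
    	masked := strings.ReplaceAll(name, "--", esc)
    	parts := strings.Split(masked, "-") // node, "k8s", pod, iface
    	unmask := func(s string) string { return strings.ReplaceAll(s, esc, "-") }
    	return unmask(parts[0]), unmask(parts[2]), unmask(parts[3])
    }

    func main() {
    	n, p, i := splitWEPName("ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0")
    	fmt.Println(n) // ci-4081-3-0-f-c7edc085f7.novalocal
    	fmt.Println(p) // coredns-668d6bf9bc-fr6sg
    	fmt.Println(i) // eth0
    }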
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.000 [INFO][4095] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-f-c7edc085f7.novalocal'
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.005 [INFO][4095] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.011 [INFO][4095] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.018 [INFO][4095] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.021 [INFO][4095] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.024 [INFO][4095] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.024 [INFO][4095] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.027 [INFO][4095] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.033 [INFO][4095] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.040 [INFO][4095] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.130/26] block=192.168.112.128/26 handle="k8s-pod-network.3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.040 [INFO][4095] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.130/26] handle="k8s-pod-network.3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.040 [INFO][4095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
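Annotation: the walk above confirms this node's affinity for the block 192.168.112.128/26 and claims 192.168.112.130 from it; every workload address in this log (.130 through .133) comes out of that same /26. A quick sanity check with the standard library:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.112.128/26") // this node's affine block
    	fmt.Println(block.Contains(netip.MustParseAddr("192.168.112.130"))) // true

    	// A /26 spans 2^(32-26) = 64 addresses: .128 through .191.
    	fmt.Println(1 << (32 - block.Bits())) // 64
    }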
Jan 30 15:46:42.060433 containerd[1471]: 2025-01-30 15:46:42.040 [INFO][4095] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.130/26] IPv6=[] ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" HandleID="k8s-pod-network.3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0"
Jan 30 15:46:42.061743 containerd[1471]: 2025-01-30 15:46:42.041 [INFO][4084] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Namespace="kube-system" Pod="coredns-668d6bf9bc-fr6sg" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ee0fa910-3c73-4131-907b-66f4fa4b13bd", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-fr6sg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09656b8426b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 15:46:42.061743 containerd[1471]: 2025-01-30 15:46:42.041 [INFO][4084] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.130/32] ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Namespace="kube-system" Pod="coredns-668d6bf9bc-fr6sg" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0"
Jan 30 15:46:42.061743 containerd[1471]: 2025-01-30 15:46:42.041 [INFO][4084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09656b8426b ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Namespace="kube-system" Pod="coredns-668d6bf9bc-fr6sg" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0"
Jan 30 15:46:42.061743 containerd[1471]: 2025-01-30 15:46:42.044 [INFO][4084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Namespace="kube-system" Pod="coredns-668d6bf9bc-fr6sg" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0"
Namespace="kube-system" Pod="coredns-668d6bf9bc-fr6sg" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:46:42.061743 containerd[1471]: 2025-01-30 15:46:42.044 [INFO][4084] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Namespace="kube-system" Pod="coredns-668d6bf9bc-fr6sg" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ee0fa910-3c73-4131-907b-66f4fa4b13bd", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4", Pod:"coredns-668d6bf9bc-fr6sg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09656b8426b", MAC:"8a:4c:ba:ee:7d:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:46:42.061743 containerd[1471]: 2025-01-30 15:46:42.057 [INFO][4084] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4" Namespace="kube-system" Pod="coredns-668d6bf9bc-fr6sg" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:46:42.085856 containerd[1471]: time="2025-01-30T15:46:42.085624490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:42.085856 containerd[1471]: time="2025-01-30T15:46:42.085790495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:42.086120 containerd[1471]: time="2025-01-30T15:46:42.085831663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:42.086230 containerd[1471]: time="2025-01-30T15:46:42.086202066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Jan 30 15:46:42.112863 systemd[1]: Started cri-containerd-3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4.scope - libcontainer container 3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4.
Jan 30 15:46:42.154153 containerd[1471]: time="2025-01-30T15:46:42.154047706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fr6sg,Uid:ee0fa910-3c73-4131-907b-66f4fa4b13bd,Namespace:kube-system,Attempt:1,} returns sandbox id \"3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4\""
Jan 30 15:46:42.159555 containerd[1471]: time="2025-01-30T15:46:42.159525835Z" level=info msg="CreateContainer within sandbox \"3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 15:46:42.186842 containerd[1471]: time="2025-01-30T15:46:42.186790536Z" level=info msg="CreateContainer within sandbox \"3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ccc86854aaf47e5fc20d40caa30e94f26414eeb5f44513c46434997b487d68b3\""
Jan 30 15:46:42.187421 containerd[1471]: time="2025-01-30T15:46:42.187307096Z" level=info msg="StartContainer for \"ccc86854aaf47e5fc20d40caa30e94f26414eeb5f44513c46434997b487d68b3\""
Jan 30 15:46:42.221891 systemd[1]: Started cri-containerd-ccc86854aaf47e5fc20d40caa30e94f26414eeb5f44513c46434997b487d68b3.scope - libcontainer container ccc86854aaf47e5fc20d40caa30e94f26414eeb5f44513c46434997b487d68b3.
Jan 30 15:46:42.252605 containerd[1471]: time="2025-01-30T15:46:42.252566947Z" level=info msg="StartContainer for \"ccc86854aaf47e5fc20d40caa30e94f26414eeb5f44513c46434997b487d68b3\" returns successfully"
Jan 30 15:46:42.570071 containerd[1471]: time="2025-01-30T15:46:42.568641501Z" level=info msg="StopPodSandbox for \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\""
Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.697 [INFO][4209] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca"
Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.697 [INFO][4209] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" iface="eth0" netns="/var/run/netns/cni-e9068003-bbcb-d530-4967-d924a4b114bd"
Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.699 [INFO][4209] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" iface="eth0" netns="/var/run/netns/cni-e9068003-bbcb-d530-4967-d924a4b114bd"
Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.699 [INFO][4209] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" iface="eth0" netns="/var/run/netns/cni-e9068003-bbcb-d530-4967-d924a4b114bd"
ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" iface="eth0" netns="/var/run/netns/cni-e9068003-bbcb-d530-4967-d924a4b114bd" Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.699 [INFO][4209] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.699 [INFO][4209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.744 [INFO][4215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" HandleID="k8s-pod-network.4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.744 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.744 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.753 [WARNING][4215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" HandleID="k8s-pod-network.4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.753 [INFO][4215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" HandleID="k8s-pod-network.4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.755 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:46:42.757937 containerd[1471]: 2025-01-30 15:46:42.756 [INFO][4209] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:46:42.758575 containerd[1471]: time="2025-01-30T15:46:42.758383302Z" level=info msg="TearDown network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\" successfully" Jan 30 15:46:42.758575 containerd[1471]: time="2025-01-30T15:46:42.758411074Z" level=info msg="StopPodSandbox for \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\" returns successfully" Jan 30 15:46:42.759244 containerd[1471]: time="2025-01-30T15:46:42.759221653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wzgwd,Uid:f958cc09-74f2-44d1-a296-1688f8e74244,Namespace:kube-system,Attempt:1,}" Jan 30 15:46:42.872114 kubelet[2615]: I0130 15:46:42.871526 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fr6sg" podStartSLOduration=37.871510595 podStartE2EDuration="37.871510595s" podCreationTimestamp="2025-01-30 15:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:42.870847385 +0000 UTC m=+42.631067410" watchObservedRunningTime="2025-01-30 15:46:42.871510595 +0000 UTC m=+42.631730610" Jan 30 15:46:42.901080 systemd[1]: run-netns-cni\x2de9068003\x2dbbcb\x2dd530\x2d4967\x2dd924a4b114bd.mount: Deactivated successfully. Jan 30 15:46:42.928809 systemd-networkd[1376]: cali8370c3db44d: Gained IPv6LL Jan 30 15:46:42.990489 systemd-networkd[1376]: calib0df61a6ae4: Link UP Jan 30 15:46:42.990909 systemd-networkd[1376]: calib0df61a6ae4: Gained carrier Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.812 [INFO][4221] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0 coredns-668d6bf9bc- kube-system f958cc09-74f2-44d1-a296-1688f8e74244 789 0 2025-01-30 15:46:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-f-c7edc085f7.novalocal coredns-668d6bf9bc-wzgwd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib0df61a6ae4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzgwd" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.812 [INFO][4221] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzgwd" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.843 [INFO][4232] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" HandleID="k8s-pod-network.fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.858 [INFO][4232] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" HandleID="k8s-pod-network.fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290810), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-f-c7edc085f7.novalocal", "pod":"coredns-668d6bf9bc-wzgwd", "timestamp":"2025-01-30 15:46:42.843583056 +0000 UTC"}, Hostname:"ci-4081-3-0-f-c7edc085f7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.858 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.859 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.859 [INFO][4232] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-f-c7edc085f7.novalocal' Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.863 [INFO][4232] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.957 [INFO][4232] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.963 [INFO][4232] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.965 [INFO][4232] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.968 [INFO][4232] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.968 [INFO][4232] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.969 [INFO][4232] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.978 [INFO][4232] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.984 [INFO][4232] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.131/26] block=192.168.112.128/26 handle="k8s-pod-network.fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.984 [INFO][4232] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.131/26] 
handle="k8s-pod-network.fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.984 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:46:43.015703 containerd[1471]: 2025-01-30 15:46:42.984 [INFO][4232] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.131/26] IPv6=[] ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" HandleID="k8s-pod-network.fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:43.017874 containerd[1471]: 2025-01-30 15:46:42.986 [INFO][4221] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzgwd" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f958cc09-74f2-44d1-a296-1688f8e74244", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-wzgwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0df61a6ae4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:46:43.017874 containerd[1471]: 2025-01-30 15:46:42.986 [INFO][4221] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.131/32] ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzgwd" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:43.017874 containerd[1471]: 2025-01-30 15:46:42.986 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0df61a6ae4 ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzgwd" 
WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:43.017874 containerd[1471]: 2025-01-30 15:46:42.992 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzgwd" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:43.017874 containerd[1471]: 2025-01-30 15:46:42.993 [INFO][4221] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzgwd" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f958cc09-74f2-44d1-a296-1688f8e74244", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e", Pod:"coredns-668d6bf9bc-wzgwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0df61a6ae4", MAC:"c2:ce:54:1f:47:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:46:43.017874 containerd[1471]: 2025-01-30 15:46:43.012 [INFO][4221] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzgwd" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:46:43.041555 containerd[1471]: time="2025-01-30T15:46:43.041256349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:43.041555 containerd[1471]: time="2025-01-30T15:46:43.041309901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
Jan 30 15:46:43.041555 containerd[1471]: time="2025-01-30T15:46:43.041334638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:43.041820 containerd[1471]: time="2025-01-30T15:46:43.041442713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:43.071854 systemd[1]: Started cri-containerd-fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e.scope - libcontainer container fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e.
Jan 30 15:46:43.124295 containerd[1471]: time="2025-01-30T15:46:43.122881989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wzgwd,Uid:f958cc09-74f2-44d1-a296-1688f8e74244,Namespace:kube-system,Attempt:1,} returns sandbox id \"fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e\""
Jan 30 15:46:43.128547 containerd[1471]: time="2025-01-30T15:46:43.128501524Z" level=info msg="CreateContainer within sandbox \"fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 15:46:43.565340 containerd[1471]: time="2025-01-30T15:46:43.564987334Z" level=info msg="StopPodSandbox for \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\""
Jan 30 15:46:43.811902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451111905.mount: Deactivated successfully.
Jan 30 15:46:43.842661 containerd[1471]: time="2025-01-30T15:46:43.842534136Z" level=info msg="CreateContainer within sandbox \"fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe6fc9c37588fe7903b59b0272e7232c353eaf42db78234235384506e7a12db0\""
Jan 30 15:46:43.846150 containerd[1471]: time="2025-01-30T15:46:43.845186480Z" level=info msg="StartContainer for \"fe6fc9c37588fe7903b59b0272e7232c353eaf42db78234235384506e7a12db0\""
Jan 30 15:46:43.892362 systemd[1]: Started cri-containerd-fe6fc9c37588fe7903b59b0272e7232c353eaf42db78234235384506e7a12db0.scope - libcontainer container fe6fc9c37588fe7903b59b0272e7232c353eaf42db78234235384506e7a12db0.
Jan 30 15:46:43.896866 systemd-networkd[1376]: cali09656b8426b: Gained IPv6LL
Jan 30 15:46:43.941646 containerd[1471]: time="2025-01-30T15:46:43.941584917Z" level=info msg="StartContainer for \"fe6fc9c37588fe7903b59b0272e7232c353eaf42db78234235384506e7a12db0\" returns successfully"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.828 [INFO][4313] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.828 [INFO][4313] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" iface="eth0" netns="/var/run/netns/cni-cf45b641-8b2f-5cc0-f522-4b1e1573db57"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.829 [INFO][4313] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" iface="eth0" netns="/var/run/netns/cni-cf45b641-8b2f-5cc0-f522-4b1e1573db57"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.831 [INFO][4313] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" iface="eth0" netns="/var/run/netns/cni-cf45b641-8b2f-5cc0-f522-4b1e1573db57"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.831 [INFO][4313] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.831 [INFO][4313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.914 [INFO][4320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" HandleID="k8s-pod-network.8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.914 [INFO][4320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.914 [INFO][4320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.934 [WARNING][4320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" HandleID="k8s-pod-network.8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.934 [INFO][4320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" HandleID="k8s-pod-network.8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.938 [INFO][4320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 15:46:43.945230 containerd[1471]: 2025-01-30 15:46:43.942 [INFO][4313] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3"
Jan 30 15:46:43.946735 containerd[1471]: time="2025-01-30T15:46:43.946330954Z" level=info msg="TearDown network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\" successfully"
Jan 30 15:46:43.946735 containerd[1471]: time="2025-01-30T15:46:43.946358315Z" level=info msg="StopPodSandbox for \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\" returns successfully"
Jan 30 15:46:43.950179 containerd[1471]: time="2025-01-30T15:46:43.950040264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2brc4,Uid:83a3f814-60c1-47be-8f8b-bd595ad0a1dc,Namespace:calico-system,Attempt:1,}"
Jan 30 15:46:43.951832 systemd[1]: run-netns-cni\x2dcf45b641\x2d8b2f\x2d5cc0\x2df522\x2d4b1e1573db57.mount: Deactivated successfully.
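Annotation: in the pod_startup_latency_tracker record above, podStartSLOduration=37.871510595 is simply watchObservedRunningTime minus podCreationTimestamp (the pull timestamps are the zero time because the image was already present), and the m=+42.63… suffix is the Go runtime's monotonic-clock reading since kubelet start. The Port:0x35 and Port:0x23c1 fields in the endpoint dumps are just hex for 53 and 9153. A check of the arithmetic:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the kubelet record for coredns-668d6bf9bc-fr6sg.
    	created, _ := time.Parse(time.DateTime, "2025-01-30 15:46:05")
    	observed, _ := time.Parse(time.DateTime, "2025-01-30 15:46:42.871510595")
    	fmt.Println(observed.Sub(created)) // 37.871510595s, matching podStartSLOduration
    }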
Jan 30 15:46:44.151031 systemd-networkd[1376]: cali6182ef6b425: Link UP
Jan 30 15:46:44.153162 systemd-networkd[1376]: cali6182ef6b425: Gained carrier
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.039 [INFO][4364] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0 csi-node-driver- calico-system 83a3f814-60c1-47be-8f8b-bd595ad0a1dc 805 0 2025-01-30 15:46:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-f-c7edc085f7.novalocal csi-node-driver-2brc4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6182ef6b425 [] []}} ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Namespace="calico-system" Pod="csi-node-driver-2brc4" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.039 [INFO][4364] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Namespace="calico-system" Pod="csi-node-driver-2brc4" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.074 [INFO][4374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" HandleID="k8s-pod-network.789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.096 [INFO][4374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" HandleID="k8s-pod-network.789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332db0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-f-c7edc085f7.novalocal", "pod":"csi-node-driver-2brc4", "timestamp":"2025-01-30 15:46:44.074630659 +0000 UTC"}, Hostname:"ci-4081-3-0-f-c7edc085f7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.096 [INFO][4374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.096 [INFO][4374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
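Annotation: every assignment and release in this log is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock" because CNI invocations for different pods can run concurrently on one node, and the plugin serializes its allocation work behind a node-wide lock. The shape of that pattern, as a sketch only (Calico's real lock lives inside its IPAM code and is not literally an flock on a temp file):

    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    )

    // withHostLock serializes a critical section across processes on one
    // node via an advisory file lock: the same idea as the host-wide
    // IPAM lock these records describe.
    func withHostLock(path string, fn func() error) error {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
    		return err
    	}
    	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
    	return fn()
    }

    func main() {
    	_ = withHostLock("/tmp/demo-ipam.lock", func() error {
    		fmt.Println("assigning addresses while holding the node-wide lock")
    		return nil
    	})
    }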
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.096 [INFO][4374] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-f-c7edc085f7.novalocal'
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.099 [INFO][4374] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.103 [INFO][4374] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.111 [INFO][4374] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.113 [INFO][4374] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.118 [INFO][4374] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.118 [INFO][4374] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.121 [INFO][4374] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.131 [INFO][4374] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.144 [INFO][4374] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.132/26] block=192.168.112.128/26 handle="k8s-pod-network.789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.144 [INFO][4374] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.132/26] handle="k8s-pod-network.789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" host="ci-4081-3-0-f-c7edc085f7.novalocal"
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.144 [INFO][4374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
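Annotation: the claim sequence above (look up affinities, load the block, pick a free slot, then "Writing block in order to claim IPs") is optimistic concurrency over a per-block allocation map. A toy version of the selection step only, seeded with the addresses already handed out in this log; treating .128 and .129 as unavailable is an assumption for illustration (the block's earlier consumers are not all visible in this excerpt):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // nextFree returns the first unallocated address in a block: a toy
    // version of the claim step, ignoring handles, reservations and the
    // compare-and-swap write-back that the real datastore update does.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		if !used[a] {
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	block := netip.MustParsePrefix("192.168.112.128/26")
    	used := map[netip.Addr]bool{
    		netip.MustParseAddr("192.168.112.128"): true, // assumed reserved/taken
    		netip.MustParseAddr("192.168.112.129"): true, // assumed taken earlier
    		netip.MustParseAddr("192.168.112.130"): true, // coredns-668d6bf9bc-fr6sg
    		netip.MustParseAddr("192.168.112.131"): true, // coredns-668d6bf9bc-wzgwd
    	}
    	a, _ := nextFree(block, used)
    	fmt.Println(a) // 192.168.112.132 — claimed above for csi-node-driver-2brc4
    }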
Jan 30 15:46:44.176740 containerd[1471]: 2025-01-30 15:46:44.144 [INFO][4374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.132/26] IPv6=[] ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" HandleID="k8s-pod-network.789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:44.178554 containerd[1471]: 2025-01-30 15:46:44.147 [INFO][4364] cni-plugin/k8s.go 386: Populated endpoint ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Namespace="calico-system" Pod="csi-node-driver-2brc4" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83a3f814-60c1-47be-8f8b-bd595ad0a1dc", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"", Pod:"csi-node-driver-2brc4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6182ef6b425", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 15:46:44.178554 containerd[1471]: 2025-01-30 15:46:44.147 [INFO][4364] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.132/32] ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Namespace="calico-system" Pod="csi-node-driver-2brc4" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:44.178554 containerd[1471]: 2025-01-30 15:46:44.147 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6182ef6b425 ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Namespace="calico-system" Pod="csi-node-driver-2brc4" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:44.178554 containerd[1471]: 2025-01-30 15:46:44.152 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Namespace="calico-system" Pod="csi-node-driver-2brc4" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:44.178554 containerd[1471]: 2025-01-30 15:46:44.152 [INFO][4364] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Namespace="calico-system" Pod="csi-node-driver-2brc4" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83a3f814-60c1-47be-8f8b-bd595ad0a1dc", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a", Pod:"csi-node-driver-2brc4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6182ef6b425", MAC:"86:38:ef:8c:3f:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 15:46:44.178554 containerd[1471]: 2025-01-30 15:46:44.173 [INFO][4364] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a" Namespace="calico-system" Pod="csi-node-driver-2brc4" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0"
Jan 30 15:46:44.220223 containerd[1471]: time="2025-01-30T15:46:44.219386213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:46:44.220223 containerd[1471]: time="2025-01-30T15:46:44.220143710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:46:44.220431 containerd[1471]: time="2025-01-30T15:46:44.220330845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:44.221976 containerd[1471]: time="2025-01-30T15:46:44.221407668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:44.245825 systemd[1]: Started cri-containerd-789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a.scope - libcontainer container 789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a.
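Annotation: the cri-containerd-<id>.scope units systemd starts here carry the same sandbox IDs that the CRI API reports, which is how one correlates journal lines with pods after the fact. A sketch using the upstream CRI client (the socket path is containerd's usual default and an assumption for this host; crictl pods gives the same view from the command line):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, s := range resp.Items {
    		// e.g. "789c5d3bc7c2 calico-system/csi-node-driver-2brc4"
    		fmt.Printf("%.12s %s/%s\n", s.Id, s.Metadata.Namespace, s.Metadata.Name)
    	}
    }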
Jan 30 15:46:44.274947 containerd[1471]: time="2025-01-30T15:46:44.274791329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2brc4,Uid:83a3f814-60c1-47be-8f8b-bd595ad0a1dc,Namespace:calico-system,Attempt:1,} returns sandbox id \"789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a\""
Jan 30 15:46:44.566639 containerd[1471]: time="2025-01-30T15:46:44.566503954Z" level=info msg="StopPodSandbox for \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\""
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.664 [INFO][4448] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61"
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.666 [INFO][4448] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" iface="eth0" netns="/var/run/netns/cni-bbb175b1-e74b-4d5b-eeb7-838a124567ea"
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.666 [INFO][4448] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" iface="eth0" netns="/var/run/netns/cni-bbb175b1-e74b-4d5b-eeb7-838a124567ea"
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.667 [INFO][4448] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" iface="eth0" netns="/var/run/netns/cni-bbb175b1-e74b-4d5b-eeb7-838a124567ea"
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.667 [INFO][4448] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61"
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.667 [INFO][4448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61"
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.697 [INFO][4454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0"
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.698 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.698 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.706 [WARNING][4454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0"
Ignoring ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.706 [INFO][4454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.708 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:46:44.711821 containerd[1471]: 2025-01-30 15:46:44.710 [INFO][4448] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:46:44.712871 containerd[1471]: time="2025-01-30T15:46:44.712074223Z" level=info msg="TearDown network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\" successfully" Jan 30 15:46:44.712871 containerd[1471]: time="2025-01-30T15:46:44.712102106Z" level=info msg="StopPodSandbox for \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\" returns successfully" Jan 30 15:46:44.712937 containerd[1471]: time="2025-01-30T15:46:44.712916681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d449c9595-lkpj9,Uid:8fe8c3db-c16d-4689-b438-3e2321856a39,Namespace:calico-apiserver,Attempt:1,}" Jan 30 15:46:44.785061 systemd-networkd[1376]: calib0df61a6ae4: Gained IPv6LL Jan 30 15:46:44.900649 systemd[1]: run-netns-cni\x2dbbb175b1\x2de74b\x2d4d5b\x2deeb7\x2d838a124567ea.mount: Deactivated successfully. 
Jan 30 15:46:44.910502 kubelet[2615]: I0130 15:46:44.910434 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wzgwd" podStartSLOduration=39.910414689 podStartE2EDuration="39.910414689s" podCreationTimestamp="2025-01-30 15:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:44.909498631 +0000 UTC m=+44.669718636" watchObservedRunningTime="2025-01-30 15:46:44.910414689 +0000 UTC m=+44.670634704"
Jan 30 15:46:45.061533 systemd-networkd[1376]: calia0efdb1a712: Link UP
Jan 30 15:46:45.062633 systemd-networkd[1376]: calia0efdb1a712: Gained carrier
Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:44.795 [INFO][4460] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0 calico-apiserver-5d449c9595- calico-apiserver 8fe8c3db-c16d-4689-b438-3e2321856a39 814 0 2025-01-30 15:46:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d449c9595 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-f-c7edc085f7.novalocal calico-apiserver-5d449c9595-lkpj9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia0efdb1a712 [] []}} ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-lkpj9" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-"
Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:44.795 [INFO][4460] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-lkpj9" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0"
Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:44.876 [INFO][4471] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" HandleID="k8s-pod-network.392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0"
Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.011 [INFO][4471] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" HandleID="k8s-pod-network.392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319c80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-f-c7edc085f7.novalocal", "pod":"calico-apiserver-5d449c9595-lkpj9", "timestamp":"2025-01-30 15:46:44.876704374 +0000 UTC"}, Hostname:"ci-4081-3-0-f-c7edc085f7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.011 [INFO][4471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
containerd[1471]: 2025-01-30 15:46:45.011 [INFO][4471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.011 [INFO][4471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.011 [INFO][4471] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-f-c7edc085f7.novalocal' Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.014 [INFO][4471] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.022 [INFO][4471] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.030 [INFO][4471] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.033 [INFO][4471] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.037 [INFO][4471] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.037 [INFO][4471] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.040 [INFO][4471] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.045 [INFO][4471] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.054 [INFO][4471] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.133/26] block=192.168.112.128/26 handle="k8s-pod-network.392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.054 [INFO][4471] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.133/26] handle="k8s-pod-network.392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.054 [INFO][4471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 15:46:45.082912 containerd[1471]: 2025-01-30 15:46:45.054 [INFO][4471] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.133/26] IPv6=[] ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" HandleID="k8s-pod-network.392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:46:45.083788 containerd[1471]: 2025-01-30 15:46:45.056 [INFO][4460] cni-plugin/k8s.go 386: Populated endpoint ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-lkpj9" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0", GenerateName:"calico-apiserver-5d449c9595-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fe8c3db-c16d-4689-b438-3e2321856a39", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d449c9595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"", Pod:"calico-apiserver-5d449c9595-lkpj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0efdb1a712", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:46:45.083788 containerd[1471]: 2025-01-30 15:46:45.056 [INFO][4460] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.133/32] ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-lkpj9" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:46:45.083788 containerd[1471]: 2025-01-30 15:46:45.056 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0efdb1a712 ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-lkpj9" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:46:45.083788 containerd[1471]: 2025-01-30 15:46:45.061 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-lkpj9" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:46:45.083788 
containerd[1471]: 2025-01-30 15:46:45.063 [INFO][4460] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-lkpj9" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0", GenerateName:"calico-apiserver-5d449c9595-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fe8c3db-c16d-4689-b438-3e2321856a39", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d449c9595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd", Pod:"calico-apiserver-5d449c9595-lkpj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0efdb1a712", MAC:"ca:54:87:5b:ab:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:46:45.083788 containerd[1471]: 2025-01-30 15:46:45.080 [INFO][4460] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-lkpj9" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:46:45.122245 containerd[1471]: time="2025-01-30T15:46:45.121525084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:45.122245 containerd[1471]: time="2025-01-30T15:46:45.121774688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:45.122245 containerd[1471]: time="2025-01-30T15:46:45.121819844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:45.122245 containerd[1471]: time="2025-01-30T15:46:45.122024893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:45.158867 systemd[1]: Started cri-containerd-392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd.scope - libcontainer container 392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd. 
Jan 30 15:46:45.212323 containerd[1471]: time="2025-01-30T15:46:45.212270768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d449c9595-lkpj9,Uid:8fe8c3db-c16d-4689-b438-3e2321856a39,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd\"" Jan 30 15:46:45.572476 containerd[1471]: time="2025-01-30T15:46:45.572370803Z" level=info msg="StopPodSandbox for \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\"" Jan 30 15:46:45.681731 systemd-networkd[1376]: cali6182ef6b425: Gained IPv6LL Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.667 [INFO][4548] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.667 [INFO][4548] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" iface="eth0" netns="/var/run/netns/cni-36b5b7bc-3aeb-7eab-27e7-21909881c510" Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.668 [INFO][4548] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" iface="eth0" netns="/var/run/netns/cni-36b5b7bc-3aeb-7eab-27e7-21909881c510" Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.668 [INFO][4548] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" iface="eth0" netns="/var/run/netns/cni-36b5b7bc-3aeb-7eab-27e7-21909881c510" Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.668 [INFO][4548] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.668 [INFO][4548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.724 [INFO][4554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" HandleID="k8s-pod-network.17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.724 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.724 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.734 [WARNING][4554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" HandleID="k8s-pod-network.17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.734 [INFO][4554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" HandleID="k8s-pod-network.17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.739 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:46:45.743149 containerd[1471]: 2025-01-30 15:46:45.741 [INFO][4548] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:46:45.743149 containerd[1471]: time="2025-01-30T15:46:45.742927561Z" level=info msg="TearDown network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\" successfully" Jan 30 15:46:45.743149 containerd[1471]: time="2025-01-30T15:46:45.742995219Z" level=info msg="StopPodSandbox for \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\" returns successfully" Jan 30 15:46:45.746101 containerd[1471]: time="2025-01-30T15:46:45.745715308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d449c9595-22mfp,Uid:9428f27e-8e9f-42ec-b91d-69ade8069655,Namespace:calico-apiserver,Attempt:1,}" Jan 30 15:46:45.747175 systemd[1]: run-netns-cni\x2d36b5b7bc\x2d3aeb\x2d7eab\x2d27e7\x2d21909881c510.mount: Deactivated successfully. 
Jan 30 15:46:45.924007 systemd-networkd[1376]: calid98c74dc10a: Link UP Jan 30 15:46:45.924664 systemd-networkd[1376]: calid98c74dc10a: Gained carrier Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.817 [INFO][4561] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0 calico-apiserver-5d449c9595- calico-apiserver 9428f27e-8e9f-42ec-b91d-69ade8069655 830 0 2025-01-30 15:46:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d449c9595 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-f-c7edc085f7.novalocal calico-apiserver-5d449c9595-22mfp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid98c74dc10a [] []}} ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-22mfp" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.817 [INFO][4561] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-22mfp" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.857 [INFO][4571] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" HandleID="k8s-pod-network.b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.869 [INFO][4571] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" HandleID="k8s-pod-network.b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edd20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-f-c7edc085f7.novalocal", "pod":"calico-apiserver-5d449c9595-22mfp", "timestamp":"2025-01-30 15:46:45.85722328 +0000 UTC"}, Hostname:"ci-4081-3-0-f-c7edc085f7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.869 [INFO][4571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.869 [INFO][4571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.869 [INFO][4571] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-f-c7edc085f7.novalocal' Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.872 [INFO][4571] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.880 [INFO][4571] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.894 [INFO][4571] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.897 [INFO][4571] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.902 [INFO][4571] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.902 [INFO][4571] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.905 [INFO][4571] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.910 [INFO][4571] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.918 [INFO][4571] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.134/26] block=192.168.112.128/26 handle="k8s-pod-network.b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.918 [INFO][4571] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.134/26] handle="k8s-pod-network.b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" host="ci-4081-3-0-f-c7edc085f7.novalocal" Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.918 [INFO][4571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 15:46:45.954869 containerd[1471]: 2025-01-30 15:46:45.918 [INFO][4571] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.134/26] IPv6=[] ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" HandleID="k8s-pod-network.b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.956129 containerd[1471]: 2025-01-30 15:46:45.920 [INFO][4561] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-22mfp" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0", GenerateName:"calico-apiserver-5d449c9595-", Namespace:"calico-apiserver", SelfLink:"", UID:"9428f27e-8e9f-42ec-b91d-69ade8069655", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d449c9595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"", Pod:"calico-apiserver-5d449c9595-22mfp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid98c74dc10a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:46:45.956129 containerd[1471]: 2025-01-30 15:46:45.920 [INFO][4561] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.134/32] ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-22mfp" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.956129 containerd[1471]: 2025-01-30 15:46:45.920 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid98c74dc10a ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-22mfp" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.956129 containerd[1471]: 2025-01-30 15:46:45.925 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-22mfp" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.956129 
containerd[1471]: 2025-01-30 15:46:45.927 [INFO][4561] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-22mfp" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0", GenerateName:"calico-apiserver-5d449c9595-", Namespace:"calico-apiserver", SelfLink:"", UID:"9428f27e-8e9f-42ec-b91d-69ade8069655", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d449c9595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda", Pod:"calico-apiserver-5d449c9595-22mfp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid98c74dc10a", MAC:"a2:13:f9:6d:55:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:46:45.956129 containerd[1471]: 2025-01-30 15:46:45.951 [INFO][4561] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda" Namespace="calico-apiserver" Pod="calico-apiserver-5d449c9595-22mfp" WorkloadEndpoint="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:46:45.987000 containerd[1471]: time="2025-01-30T15:46:45.986632214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:45.987189 containerd[1471]: time="2025-01-30T15:46:45.986952542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:45.987189 containerd[1471]: time="2025-01-30T15:46:45.986974152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:45.987189 containerd[1471]: time="2025-01-30T15:46:45.987064624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:46.021879 systemd[1]: Started cri-containerd-b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda.scope - libcontainer container b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda. 
Jan 30 15:46:46.078778 containerd[1471]: time="2025-01-30T15:46:46.078733415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d449c9595-22mfp,Uid:9428f27e-8e9f-42ec-b91d-69ade8069655,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda\"" Jan 30 15:46:46.193907 systemd-networkd[1376]: calia0efdb1a712: Gained IPv6LL Jan 30 15:46:47.839726 containerd[1471]: time="2025-01-30T15:46:47.839638921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:47.842938 containerd[1471]: time="2025-01-30T15:46:47.842656400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 15:46:47.844871 containerd[1471]: time="2025-01-30T15:46:47.844802068Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:47.848122 containerd[1471]: time="2025-01-30T15:46:47.848076224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:47.849402 containerd[1471]: time="2025-01-30T15:46:47.848859749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 6.701499749s" Jan 30 15:46:47.849402 containerd[1471]: time="2025-01-30T15:46:47.848900376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 15:46:47.856199 containerd[1471]: time="2025-01-30T15:46:47.855737095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 15:46:47.871857 containerd[1471]: time="2025-01-30T15:46:47.871820623Z" level=info msg="CreateContainer within sandbox \"f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 15:46:47.899655 containerd[1471]: time="2025-01-30T15:46:47.899618135Z" level=info msg="CreateContainer within sandbox \"f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"34bcf8d8e53f5c38735393740e361ade71b1938661f5709b0bf704ff7f17d921\"" Jan 30 15:46:47.900737 containerd[1471]: time="2025-01-30T15:46:47.900582183Z" level=info msg="StartContainer for \"34bcf8d8e53f5c38735393740e361ade71b1938661f5709b0bf704ff7f17d921\"" Jan 30 15:46:47.921774 systemd-networkd[1376]: calid98c74dc10a: Gained IPv6LL Jan 30 15:46:47.934834 systemd[1]: Started cri-containerd-34bcf8d8e53f5c38735393740e361ade71b1938661f5709b0bf704ff7f17d921.scope - libcontainer container 34bcf8d8e53f5c38735393740e361ade71b1938661f5709b0bf704ff7f17d921. 
Jan 30 15:46:48.212128 containerd[1471]: time="2025-01-30T15:46:48.211805888Z" level=info msg="StartContainer for \"34bcf8d8e53f5c38735393740e361ade71b1938661f5709b0bf704ff7f17d921\" returns successfully" Jan 30 15:46:49.083971 kubelet[2615]: I0130 15:46:49.083830 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f9c566f8c-m47dw" podStartSLOduration=29.375208562 podStartE2EDuration="36.083813551s" podCreationTimestamp="2025-01-30 15:46:13 +0000 UTC" firstStartedPulling="2025-01-30 15:46:41.146939321 +0000 UTC m=+40.907159336" lastFinishedPulling="2025-01-30 15:46:47.85554431 +0000 UTC m=+47.615764325" observedRunningTime="2025-01-30 15:46:49.013448661 +0000 UTC m=+48.773668727" watchObservedRunningTime="2025-01-30 15:46:49.083813551 +0000 UTC m=+48.844033566" Jan 30 15:46:50.305969 containerd[1471]: time="2025-01-30T15:46:50.305921459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:50.307295 containerd[1471]: time="2025-01-30T15:46:50.307240807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 15:46:50.308652 containerd[1471]: time="2025-01-30T15:46:50.308603468Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:50.311330 containerd[1471]: time="2025-01-30T15:46:50.311281100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:50.311963 containerd[1471]: time="2025-01-30T15:46:50.311909200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.456140374s" Jan 30 15:46:50.312023 containerd[1471]: time="2025-01-30T15:46:50.311964034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 15:46:50.314986 containerd[1471]: time="2025-01-30T15:46:50.314548990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 15:46:50.316183 containerd[1471]: time="2025-01-30T15:46:50.316137940Z" level=info msg="CreateContainer within sandbox \"789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 15:46:50.340212 containerd[1471]: time="2025-01-30T15:46:50.340105034Z" level=info msg="CreateContainer within sandbox \"789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"826ccdf707372ac72dfae251bdb9b3e5da915348f9e548806efa9bad72320f0f\"" Jan 30 15:46:50.341133 containerd[1471]: time="2025-01-30T15:46:50.340971756Z" level=info msg="StartContainer for \"826ccdf707372ac72dfae251bdb9b3e5da915348f9e548806efa9bad72320f0f\"" Jan 30 15:46:50.381831 systemd[1]: Started cri-containerd-826ccdf707372ac72dfae251bdb9b3e5da915348f9e548806efa9bad72320f0f.scope - libcontainer container 
826ccdf707372ac72dfae251bdb9b3e5da915348f9e548806efa9bad72320f0f. Jan 30 15:46:50.417542 containerd[1471]: time="2025-01-30T15:46:50.417490988Z" level=info msg="StartContainer for \"826ccdf707372ac72dfae251bdb9b3e5da915348f9e548806efa9bad72320f0f\" returns successfully" Jan 30 15:46:53.754355 containerd[1471]: time="2025-01-30T15:46:53.754298171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:53.755702 containerd[1471]: time="2025-01-30T15:46:53.755626736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 15:46:53.757082 containerd[1471]: time="2025-01-30T15:46:53.757035793Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:53.760126 containerd[1471]: time="2025-01-30T15:46:53.760052293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:53.761354 containerd[1471]: time="2025-01-30T15:46:53.760786502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.446189802s" Jan 30 15:46:53.761354 containerd[1471]: time="2025-01-30T15:46:53.760819875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 15:46:53.782209 containerd[1471]: time="2025-01-30T15:46:53.782101624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 15:46:53.790120 containerd[1471]: time="2025-01-30T15:46:53.790071899Z" level=info msg="CreateContainer within sandbox \"392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 15:46:53.822725 containerd[1471]: time="2025-01-30T15:46:53.822231207Z" level=info msg="CreateContainer within sandbox \"392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dca18dda98b480ef9a25fd964acfecc4ab8b90692335111060e48fc3e239f039\"" Jan 30 15:46:53.824325 containerd[1471]: time="2025-01-30T15:46:53.824298620Z" level=info msg="StartContainer for \"dca18dda98b480ef9a25fd964acfecc4ab8b90692335111060e48fc3e239f039\"" Jan 30 15:46:53.861890 systemd[1]: run-containerd-runc-k8s.io-dca18dda98b480ef9a25fd964acfecc4ab8b90692335111060e48fc3e239f039-runc.Fs6qwf.mount: Deactivated successfully. Jan 30 15:46:53.868818 systemd[1]: Started cri-containerd-dca18dda98b480ef9a25fd964acfecc4ab8b90692335111060e48fc3e239f039.scope - libcontainer container dca18dda98b480ef9a25fd964acfecc4ab8b90692335111060e48fc3e239f039. 
Jan 30 15:46:53.910662 containerd[1471]: time="2025-01-30T15:46:53.910619138Z" level=info msg="StartContainer for \"dca18dda98b480ef9a25fd964acfecc4ab8b90692335111060e48fc3e239f039\" returns successfully" Jan 30 15:46:54.203435 containerd[1471]: time="2025-01-30T15:46:54.203279614Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:54.207595 containerd[1471]: time="2025-01-30T15:46:54.207507545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 15:46:54.213655 containerd[1471]: time="2025-01-30T15:46:54.213589663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 431.413738ms" Jan 30 15:46:54.213767 containerd[1471]: time="2025-01-30T15:46:54.213664966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 15:46:54.216538 containerd[1471]: time="2025-01-30T15:46:54.216042314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 15:46:54.220570 containerd[1471]: time="2025-01-30T15:46:54.220509237Z" level=info msg="CreateContainer within sandbox \"b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 15:46:54.253014 containerd[1471]: time="2025-01-30T15:46:54.252939804Z" level=info msg="CreateContainer within sandbox \"b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4766e58be34ba6b91347a0970be66ce7718dc8041a5d82df26f088c13280dc79\"" Jan 30 15:46:54.255344 containerd[1471]: time="2025-01-30T15:46:54.255292947Z" level=info msg="StartContainer for \"4766e58be34ba6b91347a0970be66ce7718dc8041a5d82df26f088c13280dc79\"" Jan 30 15:46:54.295972 systemd[1]: Started cri-containerd-4766e58be34ba6b91347a0970be66ce7718dc8041a5d82df26f088c13280dc79.scope - libcontainer container 4766e58be34ba6b91347a0970be66ce7718dc8041a5d82df26f088c13280dc79. 
Jan 30 15:46:54.348612 containerd[1471]: time="2025-01-30T15:46:54.348475071Z" level=info msg="StartContainer for \"4766e58be34ba6b91347a0970be66ce7718dc8041a5d82df26f088c13280dc79\" returns successfully" Jan 30 15:46:54.993798 kubelet[2615]: I0130 15:46:54.993736 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d449c9595-lkpj9" podStartSLOduration=34.426539078 podStartE2EDuration="42.993717324s" podCreationTimestamp="2025-01-30 15:46:12 +0000 UTC" firstStartedPulling="2025-01-30 15:46:45.214765559 +0000 UTC m=+44.974985574" lastFinishedPulling="2025-01-30 15:46:53.781943805 +0000 UTC m=+53.542163820" observedRunningTime="2025-01-30 15:46:54.018284467 +0000 UTC m=+53.778504472" watchObservedRunningTime="2025-01-30 15:46:54.993717324 +0000 UTC m=+54.753937329" Jan 30 15:46:55.364579 kubelet[2615]: I0130 15:46:55.364438 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d449c9595-22mfp" podStartSLOduration=35.228978616 podStartE2EDuration="43.364418043s" podCreationTimestamp="2025-01-30 15:46:12 +0000 UTC" firstStartedPulling="2025-01-30 15:46:46.080375007 +0000 UTC m=+45.840595012" lastFinishedPulling="2025-01-30 15:46:54.215814384 +0000 UTC m=+53.976034439" observedRunningTime="2025-01-30 15:46:54.994277163 +0000 UTC m=+54.754497178" watchObservedRunningTime="2025-01-30 15:46:55.364418043 +0000 UTC m=+55.124638058" Jan 30 15:46:56.381126 containerd[1471]: time="2025-01-30T15:46:56.381066544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:56.383648 containerd[1471]: time="2025-01-30T15:46:56.383614884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 15:46:56.385255 containerd[1471]: time="2025-01-30T15:46:56.385229458Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:56.388072 containerd[1471]: time="2025-01-30T15:46:56.388039323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:56.388956 containerd[1471]: time="2025-01-30T15:46:56.388920339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.172836397s" Jan 30 15:46:56.389054 containerd[1471]: time="2025-01-30T15:46:56.389034325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 15:46:56.391570 containerd[1471]: time="2025-01-30T15:46:56.391520337Z" level=info msg="CreateContainer within sandbox \"789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 15:46:56.420693 containerd[1471]: 
time="2025-01-30T15:46:56.420286426Z" level=info msg="CreateContainer within sandbox \"789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b2f82d86fe19a8ac74c2b874403be1b6e0f6a36363f623477067bc22fb6f7d4b\"" Jan 30 15:46:56.425257 containerd[1471]: time="2025-01-30T15:46:56.421949441Z" level=info msg="StartContainer for \"b2f82d86fe19a8ac74c2b874403be1b6e0f6a36363f623477067bc22fb6f7d4b\"" Jan 30 15:46:56.472422 systemd[1]: run-containerd-runc-k8s.io-b2f82d86fe19a8ac74c2b874403be1b6e0f6a36363f623477067bc22fb6f7d4b-runc.YsqJ15.mount: Deactivated successfully. Jan 30 15:46:56.479867 systemd[1]: Started cri-containerd-b2f82d86fe19a8ac74c2b874403be1b6e0f6a36363f623477067bc22fb6f7d4b.scope - libcontainer container b2f82d86fe19a8ac74c2b874403be1b6e0f6a36363f623477067bc22fb6f7d4b. Jan 30 15:46:56.512741 containerd[1471]: time="2025-01-30T15:46:56.512611073Z" level=info msg="StartContainer for \"b2f82d86fe19a8ac74c2b874403be1b6e0f6a36363f623477067bc22fb6f7d4b\" returns successfully" Jan 30 15:46:56.679552 kubelet[2615]: I0130 15:46:56.678884 2615 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 15:46:56.679552 kubelet[2615]: I0130 15:46:56.678923 2615 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 15:47:00.545310 containerd[1471]: time="2025-01-30T15:47:00.544961762Z" level=info msg="StopPodSandbox for \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\"" Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.717 [WARNING][4900] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ee0fa910-3c73-4131-907b-66f4fa4b13bd", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4", Pod:"coredns-668d6bf9bc-fr6sg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09656b8426b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.718 [INFO][4900] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.718 [INFO][4900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" iface="eth0" netns="" Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.718 [INFO][4900] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.718 [INFO][4900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.750 [INFO][4907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" HandleID="k8s-pod-network.8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.752 [INFO][4907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.753 [INFO][4907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.760 [WARNING][4907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" HandleID="k8s-pod-network.8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.760 [INFO][4907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" HandleID="k8s-pod-network.8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.761 [INFO][4907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:00.764198 containerd[1471]: 2025-01-30 15:47:00.762 [INFO][4900] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:47:00.765369 containerd[1471]: time="2025-01-30T15:47:00.764226872Z" level=info msg="TearDown network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\" successfully" Jan 30 15:47:00.765369 containerd[1471]: time="2025-01-30T15:47:00.764284581Z" level=info msg="StopPodSandbox for \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\" returns successfully" Jan 30 15:47:00.765755 containerd[1471]: time="2025-01-30T15:47:00.765486132Z" level=info msg="RemovePodSandbox for \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\"" Jan 30 15:47:00.765755 containerd[1471]: time="2025-01-30T15:47:00.765537158Z" level=info msg="Forcibly stopping sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\"" Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.801 [WARNING][4925] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ee0fa910-3c73-4131-907b-66f4fa4b13bd", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"3a07533d37eeaf1db29f94da8d858c90ce861e76056293e5786a0c48a68ce1a4", Pod:"coredns-668d6bf9bc-fr6sg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09656b8426b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.801 [INFO][4925] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.801 [INFO][4925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" iface="eth0" netns="" Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.801 [INFO][4925] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.801 [INFO][4925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.822 [INFO][4931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" HandleID="k8s-pod-network.8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.822 [INFO][4931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.822 [INFO][4931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.830 [WARNING][4931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" HandleID="k8s-pod-network.8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.830 [INFO][4931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" HandleID="k8s-pod-network.8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--fr6sg-eth0" Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.831 [INFO][4931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:00.834156 containerd[1471]: 2025-01-30 15:47:00.833 [INFO][4925] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226" Jan 30 15:47:00.834156 containerd[1471]: time="2025-01-30T15:47:00.834102354Z" level=info msg="TearDown network for sandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\" successfully" Jan 30 15:47:00.863718 containerd[1471]: time="2025-01-30T15:47:00.863608989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:47:00.863718 containerd[1471]: time="2025-01-30T15:47:00.863717564Z" level=info msg="RemovePodSandbox \"8c9819205f8c529b6301039936e5e720a8af83ad81e8811b2df963e45e965226\" returns successfully" Jan 30 15:47:00.864290 containerd[1471]: time="2025-01-30T15:47:00.864225284Z" level=info msg="StopPodSandbox for \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\"" Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.903 [WARNING][4949] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0", GenerateName:"calico-kube-controllers-6f9c566f8c-", Namespace:"calico-system", SelfLink:"", UID:"e043c395-4b2d-4788-a24a-2ff2e7d7cf00", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9c566f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156", Pod:"calico-kube-controllers-6f9c566f8c-m47dw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8370c3db44d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.903 [INFO][4949] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.903 [INFO][4949] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" iface="eth0" netns="" Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.903 [INFO][4949] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.903 [INFO][4949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.933 [INFO][4956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" HandleID="k8s-pod-network.0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.933 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.934 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.941 [WARNING][4956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" HandleID="k8s-pod-network.0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.941 [INFO][4956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" HandleID="k8s-pod-network.0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.944 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:00.951723 containerd[1471]: 2025-01-30 15:47:00.946 [INFO][4949] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:47:00.951723 containerd[1471]: time="2025-01-30T15:47:00.949573898Z" level=info msg="TearDown network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\" successfully" Jan 30 15:47:00.951723 containerd[1471]: time="2025-01-30T15:47:00.949607652Z" level=info msg="StopPodSandbox for \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\" returns successfully" Jan 30 15:47:00.951723 containerd[1471]: time="2025-01-30T15:47:00.950419176Z" level=info msg="RemovePodSandbox for \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\"" Jan 30 15:47:00.951723 containerd[1471]: time="2025-01-30T15:47:00.950504998Z" level=info msg="Forcibly stopping sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\"" Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:00.990 [WARNING][4977] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0", GenerateName:"calico-kube-controllers-6f9c566f8c-", Namespace:"calico-system", SelfLink:"", UID:"e043c395-4b2d-4788-a24a-2ff2e7d7cf00", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9c566f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"f3e91e005e21e0b262698d72d136514f3cee5b24d74936b230820a67de5a3156", Pod:"calico-kube-controllers-6f9c566f8c-m47dw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8370c3db44d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:00.991 [INFO][4977] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:00.991 [INFO][4977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" iface="eth0" netns="" Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:00.991 [INFO][4977] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:00.991 [INFO][4977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:01.028 [INFO][4983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" HandleID="k8s-pod-network.0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:01.028 [INFO][4983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:01.028 [INFO][4983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:01.035 [WARNING][4983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" HandleID="k8s-pod-network.0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:01.035 [INFO][4983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" HandleID="k8s-pod-network.0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--kube--controllers--6f9c566f8c--m47dw-eth0" Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:01.036 [INFO][4983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:01.039367 containerd[1471]: 2025-01-30 15:47:01.038 [INFO][4977] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f" Jan 30 15:47:01.039844 containerd[1471]: time="2025-01-30T15:47:01.039406654Z" level=info msg="TearDown network for sandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\" successfully" Jan 30 15:47:01.043764 containerd[1471]: time="2025-01-30T15:47:01.043718001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:47:01.043848 containerd[1471]: time="2025-01-30T15:47:01.043820665Z" level=info msg="RemovePodSandbox \"0299d58359da946cb71f5833f28ec05164fa6d6008b7c004c2e104420f4b776f\" returns successfully" Jan 30 15:47:01.044620 containerd[1471]: time="2025-01-30T15:47:01.044585891Z" level=info msg="StopPodSandbox for \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\"" Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.084 [WARNING][5007] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f958cc09-74f2-44d1-a296-1688f8e74244", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e", Pod:"coredns-668d6bf9bc-wzgwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0df61a6ae4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.084 [INFO][5007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.084 [INFO][5007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" iface="eth0" netns="" Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.084 [INFO][5007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.084 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.104 [INFO][5013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" HandleID="k8s-pod-network.4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.104 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.104 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.111 [WARNING][5013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" HandleID="k8s-pod-network.4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.111 [INFO][5013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" HandleID="k8s-pod-network.4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.113 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:01.115344 containerd[1471]: 2025-01-30 15:47:01.114 [INFO][5007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:47:01.116756 containerd[1471]: time="2025-01-30T15:47:01.115371798Z" level=info msg="TearDown network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\" successfully" Jan 30 15:47:01.116931 containerd[1471]: time="2025-01-30T15:47:01.116830545Z" level=info msg="StopPodSandbox for \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\" returns successfully" Jan 30 15:47:01.117532 containerd[1471]: time="2025-01-30T15:47:01.117496964Z" level=info msg="RemovePodSandbox for \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\"" Jan 30 15:47:01.117532 containerd[1471]: time="2025-01-30T15:47:01.117529906Z" level=info msg="Forcibly stopping sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\"" Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.165 [WARNING][5031] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f958cc09-74f2-44d1-a296-1688f8e74244", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"fba82400c9f581a71542cdff3a7b4278690369ab2ffa8f4a52d007d1ab462a4e", Pod:"coredns-668d6bf9bc-wzgwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0df61a6ae4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.165 [INFO][5031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.165 [INFO][5031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" iface="eth0" netns="" Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.165 [INFO][5031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.165 [INFO][5031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.185 [INFO][5037] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" HandleID="k8s-pod-network.4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.185 [INFO][5037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.185 [INFO][5037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.192 [WARNING][5037] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" HandleID="k8s-pod-network.4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.192 [INFO][5037] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" HandleID="k8s-pod-network.4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-coredns--668d6bf9bc--wzgwd-eth0" Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.194 [INFO][5037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:01.197042 containerd[1471]: 2025-01-30 15:47:01.195 [INFO][5031] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca" Jan 30 15:47:01.197502 containerd[1471]: time="2025-01-30T15:47:01.197086346Z" level=info msg="TearDown network for sandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\" successfully" Jan 30 15:47:01.200770 containerd[1471]: time="2025-01-30T15:47:01.200728839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:47:01.200823 containerd[1471]: time="2025-01-30T15:47:01.200789523Z" level=info msg="RemovePodSandbox \"4060839c341abb629a12e4fae8fc2c9b40b5b398b87fe1f5de99dcf8460cecca\" returns successfully" Jan 30 15:47:01.201535 containerd[1471]: time="2025-01-30T15:47:01.201267146Z" level=info msg="StopPodSandbox for \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\"" Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.238 [WARNING][5055] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0", GenerateName:"calico-apiserver-5d449c9595-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fe8c3db-c16d-4689-b438-3e2321856a39", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d449c9595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd", Pod:"calico-apiserver-5d449c9595-lkpj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0efdb1a712", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.239 [INFO][5055] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.239 [INFO][5055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" iface="eth0" netns="" Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.239 [INFO][5055] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.239 [INFO][5055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.261 [INFO][5061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.261 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.261 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.270 [WARNING][5061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.270 [INFO][5061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.271 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:01.273799 containerd[1471]: 2025-01-30 15:47:01.272 [INFO][5055] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:47:01.274738 containerd[1471]: time="2025-01-30T15:47:01.273827075Z" level=info msg="TearDown network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\" successfully" Jan 30 15:47:01.274738 containerd[1471]: time="2025-01-30T15:47:01.273851693Z" level=info msg="StopPodSandbox for \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\" returns successfully" Jan 30 15:47:01.274738 containerd[1471]: time="2025-01-30T15:47:01.274267437Z" level=info msg="RemovePodSandbox for \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\"" Jan 30 15:47:01.274738 containerd[1471]: time="2025-01-30T15:47:01.274300701Z" level=info msg="Forcibly stopping sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\"" Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.314 [WARNING][5079] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0", GenerateName:"calico-apiserver-5d449c9595-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fe8c3db-c16d-4689-b438-3e2321856a39", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d449c9595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"392abd808cbf7df0a6022ea9e3a70ee38a622bf2d046e6e7d01a3e6167f88bfd", Pod:"calico-apiserver-5d449c9595-lkpj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0efdb1a712", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.314 [INFO][5079] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.314 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" iface="eth0" netns="" Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.314 [INFO][5079] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.314 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.335 [INFO][5085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.335 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.335 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.346 [WARNING][5085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.346 [INFO][5085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" HandleID="k8s-pod-network.98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--lkpj9-eth0" Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.347 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:01.349804 containerd[1471]: 2025-01-30 15:47:01.348 [INFO][5079] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61" Jan 30 15:47:01.350216 containerd[1471]: time="2025-01-30T15:47:01.349847083Z" level=info msg="TearDown network for sandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\" successfully" Jan 30 15:47:01.353776 containerd[1471]: time="2025-01-30T15:47:01.353725832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:47:01.353837 containerd[1471]: time="2025-01-30T15:47:01.353791647Z" level=info msg="RemovePodSandbox \"98369c34414b7f3923cae10365cb14ff5a038f3b40e249374af1b7f4425b5a61\" returns successfully" Jan 30 15:47:01.354765 containerd[1471]: time="2025-01-30T15:47:01.354439170Z" level=info msg="StopPodSandbox for \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\"" Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.391 [WARNING][5104] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0", GenerateName:"calico-apiserver-5d449c9595-", Namespace:"calico-apiserver", SelfLink:"", UID:"9428f27e-8e9f-42ec-b91d-69ade8069655", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d449c9595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda", Pod:"calico-apiserver-5d449c9595-22mfp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid98c74dc10a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.391 [INFO][5104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.391 [INFO][5104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" iface="eth0" netns="" Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.391 [INFO][5104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.391 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.419 [INFO][5110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" HandleID="k8s-pod-network.17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.419 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.419 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.427 [WARNING][5110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" HandleID="k8s-pod-network.17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.427 [INFO][5110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" HandleID="k8s-pod-network.17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.431 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:01.435874 containerd[1471]: 2025-01-30 15:47:01.433 [INFO][5104] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:47:01.436731 containerd[1471]: time="2025-01-30T15:47:01.436504351Z" level=info msg="TearDown network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\" successfully" Jan 30 15:47:01.436731 containerd[1471]: time="2025-01-30T15:47:01.436543987Z" level=info msg="StopPodSandbox for \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\" returns successfully" Jan 30 15:47:01.437189 containerd[1471]: time="2025-01-30T15:47:01.437158167Z" level=info msg="RemovePodSandbox for \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\"" Jan 30 15:47:01.437229 containerd[1471]: time="2025-01-30T15:47:01.437195608Z" level=info msg="Forcibly stopping sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\"" Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.475 [WARNING][5128] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0", GenerateName:"calico-apiserver-5d449c9595-", Namespace:"calico-apiserver", SelfLink:"", UID:"9428f27e-8e9f-42ec-b91d-69ade8069655", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d449c9595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"b5a6f313b848f84ea4130d7024bf82d6a46a8189882b296e9bffbfc8974c1bda", Pod:"calico-apiserver-5d449c9595-22mfp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid98c74dc10a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.476 [INFO][5128] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.476 [INFO][5128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" iface="eth0" netns="" Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.476 [INFO][5128] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.476 [INFO][5128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.502 [INFO][5134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" HandleID="k8s-pod-network.17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.502 [INFO][5134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.502 [INFO][5134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.510 [WARNING][5134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" HandleID="k8s-pod-network.17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.512 [INFO][5134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" HandleID="k8s-pod-network.17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-calico--apiserver--5d449c9595--22mfp-eth0" Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:01.514 [INFO][5134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:02.080626 containerd[1471]: 2025-01-30 15:47:02.078 [INFO][5128] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616" Jan 30 15:47:02.082374 containerd[1471]: time="2025-01-30T15:47:02.080823663Z" level=info msg="TearDown network for sandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\" successfully" Jan 30 15:47:02.350131 containerd[1471]: time="2025-01-30T15:47:02.349257396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:47:02.350131 containerd[1471]: time="2025-01-30T15:47:02.349334582Z" level=info msg="RemovePodSandbox \"17e72247690d41d96df072288f81d5abff866bb95c5784d57583c07f939e6616\" returns successfully" Jan 30 15:47:02.350131 containerd[1471]: time="2025-01-30T15:47:02.349931551Z" level=info msg="StopPodSandbox for \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\"" Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.398 [WARNING][5154] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83a3f814-60c1-47be-8f8b-bd595ad0a1dc", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a", Pod:"csi-node-driver-2brc4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6182ef6b425", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.398 [INFO][5154] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.399 [INFO][5154] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" iface="eth0" netns="" Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.399 [INFO][5154] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.399 [INFO][5154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.424 [INFO][5160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" HandleID="k8s-pod-network.8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0" Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.425 [INFO][5160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.425 [INFO][5160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.440 [WARNING][5160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" HandleID="k8s-pod-network.8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0" Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.440 [INFO][5160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" HandleID="k8s-pod-network.8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0" Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.452 [INFO][5160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:02.456630 containerd[1471]: 2025-01-30 15:47:02.455 [INFO][5154] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Jan 30 15:47:02.457576 containerd[1471]: time="2025-01-30T15:47:02.456943856Z" level=info msg="TearDown network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\" successfully" Jan 30 15:47:02.457576 containerd[1471]: time="2025-01-30T15:47:02.457026792Z" level=info msg="StopPodSandbox for \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\" returns successfully" Jan 30 15:47:02.458214 containerd[1471]: time="2025-01-30T15:47:02.458143662Z" level=info msg="RemovePodSandbox for \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\"" Jan 30 15:47:02.458313 containerd[1471]: time="2025-01-30T15:47:02.458266604Z" level=info msg="Forcibly stopping sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\"" Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.506 [WARNING][5178] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83a3f814-60c1-47be-8f8b-bd595ad0a1dc", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 46, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-f-c7edc085f7.novalocal", ContainerID:"789c5d3bc7c24e4b088e93c4b4a3a239cf52b565996456074bd7c5ada070b87a", Pod:"csi-node-driver-2brc4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6182ef6b425", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.506 [INFO][5178] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.506 [INFO][5178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" iface="eth0" netns="" Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.506 [INFO][5178] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.506 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.541 [INFO][5185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" HandleID="k8s-pod-network.8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0" Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.541 [INFO][5185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.541 [INFO][5185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.548 [WARNING][5185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" HandleID="k8s-pod-network.8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0" Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.548 [INFO][5185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" HandleID="k8s-pod-network.8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Workload="ci--4081--3--0--f--c7edc085f7.novalocal-k8s-csi--node--driver--2brc4-eth0" Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.550 [INFO][5185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:47:02.552326 containerd[1471]: 2025-01-30 15:47:02.551 [INFO][5178] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3" Jan 30 15:47:02.553201 containerd[1471]: time="2025-01-30T15:47:02.552736254Z" level=info msg="TearDown network for sandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\" successfully" Jan 30 15:47:02.557434 containerd[1471]: time="2025-01-30T15:47:02.557193975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:47:02.557434 containerd[1471]: time="2025-01-30T15:47:02.557263847Z" level=info msg="RemovePodSandbox \"8288b2e571b45b7de9ba88de394da11c1b55bc82e9977724f6269bba534035c3\" returns successfully" Jan 30 15:47:08.802319 systemd[1]: Started sshd@9-172.24.4.138:22-172.24.4.1:39960.service - OpenSSH per-connection server daemon (172.24.4.1:39960). Jan 30 15:47:09.988247 kubelet[2615]: I0130 15:47:09.988187 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2brc4" podStartSLOduration=44.876406993 podStartE2EDuration="56.988168587s" podCreationTimestamp="2025-01-30 15:46:13 +0000 UTC" firstStartedPulling="2025-01-30 15:46:44.278081792 +0000 UTC m=+44.038301797" lastFinishedPulling="2025-01-30 15:46:56.389843385 +0000 UTC m=+56.150063391" observedRunningTime="2025-01-30 15:46:57.019897685 +0000 UTC m=+56.780117700" watchObservedRunningTime="2025-01-30 15:47:09.988168587 +0000 UTC m=+69.748388592" Jan 30 15:47:10.059823 sshd[5199]: Accepted publickey for core from 172.24.4.1 port 39960 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:47:10.063723 sshd[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:47:10.075184 systemd-logind[1453]: New session 12 of user core. Jan 30 15:47:10.082033 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 15:47:10.897089 sshd[5199]: pam_unix(sshd:session): session closed for user core Jan 30 15:47:10.904072 systemd[1]: sshd@9-172.24.4.138:22-172.24.4.1:39960.service: Deactivated successfully. Jan 30 15:47:10.909763 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 15:47:10.912087 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Jan 30 15:47:10.914777 systemd-logind[1453]: Removed session 12. Jan 30 15:47:15.921046 systemd[1]: Started sshd@10-172.24.4.138:22-172.24.4.1:57066.service - OpenSSH per-connection server daemon (172.24.4.1:57066). 
Jan 30 15:47:17.095497 sshd[5238]: Accepted publickey for core from 172.24.4.1 port 57066 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:47:17.097430 sshd[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:47:17.104011 systemd-logind[1453]: New session 13 of user core. Jan 30 15:47:17.109324 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 15:47:17.869022 sshd[5238]: pam_unix(sshd:session): session closed for user core Jan 30 15:47:17.876599 systemd[1]: sshd@10-172.24.4.138:22-172.24.4.1:57066.service: Deactivated successfully. Jan 30 15:47:17.883362 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 15:47:17.892035 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Jan 30 15:47:17.897972 systemd-logind[1453]: Removed session 13. Jan 30 15:47:18.063165 systemd[1]: run-containerd-runc-k8s.io-34bcf8d8e53f5c38735393740e361ade71b1938661f5709b0bf704ff7f17d921-runc.kZvZag.mount: Deactivated successfully. Jan 30 15:47:22.888239 systemd[1]: Started sshd@11-172.24.4.138:22-172.24.4.1:57080.service - OpenSSH per-connection server daemon (172.24.4.1:57080). Jan 30 15:47:24.164344 sshd[5296]: Accepted publickey for core from 172.24.4.1 port 57080 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:47:24.169910 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:47:24.181153 systemd-logind[1453]: New session 14 of user core. Jan 30 15:47:24.188005 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 15:47:24.942065 sshd[5296]: pam_unix(sshd:session): session closed for user core Jan 30 15:47:24.953370 systemd[1]: sshd@11-172.24.4.138:22-172.24.4.1:57080.service: Deactivated successfully. Jan 30 15:47:24.958381 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 15:47:24.962363 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Jan 30 15:47:24.969274 systemd[1]: Started sshd@12-172.24.4.138:22-172.24.4.1:40832.service - OpenSSH per-connection server daemon (172.24.4.1:40832). Jan 30 15:47:24.972494 systemd-logind[1453]: Removed session 14. Jan 30 15:47:26.069735 sshd[5310]: Accepted publickey for core from 172.24.4.1 port 40832 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:47:26.072435 sshd[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:47:26.082571 systemd-logind[1453]: New session 15 of user core. Jan 30 15:47:26.091958 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 15:47:26.983791 sshd[5310]: pam_unix(sshd:session): session closed for user core Jan 30 15:47:26.990391 systemd[1]: sshd@12-172.24.4.138:22-172.24.4.1:40832.service: Deactivated successfully. Jan 30 15:47:26.992438 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 15:47:26.994228 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Jan 30 15:47:27.000110 systemd[1]: Started sshd@13-172.24.4.138:22-172.24.4.1:40842.service - OpenSSH per-connection server daemon (172.24.4.1:40842). Jan 30 15:47:27.002401 systemd-logind[1453]: Removed session 15. 
Jan 30 15:47:28.460654 sshd[5323]: Accepted publickey for core from 172.24.4.1 port 40842 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:47:28.464019 sshd[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:28.473945 systemd-logind[1453]: New session 16 of user core.
Jan 30 15:47:28.484075 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 15:47:29.268336 sshd[5323]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:29.274396 systemd[1]: sshd@13-172.24.4.138:22-172.24.4.1:40842.service: Deactivated successfully.
Jan 30 15:47:29.279184 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 15:47:29.284180 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Jan 30 15:47:29.286989 systemd-logind[1453]: Removed session 16.
Jan 30 15:47:34.289329 systemd[1]: Started sshd@14-172.24.4.138:22-172.24.4.1:44432.service - OpenSSH per-connection server daemon (172.24.4.1:44432).
Jan 30 15:47:35.651159 sshd[5335]: Accepted publickey for core from 172.24.4.1 port 44432 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:47:35.653927 sshd[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:35.664508 systemd-logind[1453]: New session 17 of user core.
Jan 30 15:47:35.675018 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 15:47:36.334161 sshd[5335]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:36.341184 systemd[1]: sshd@14-172.24.4.138:22-172.24.4.1:44432.service: Deactivated successfully.
Jan 30 15:47:36.347432 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 15:47:36.349959 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Jan 30 15:47:36.352532 systemd-logind[1453]: Removed session 17.
Jan 30 15:47:41.357254 systemd[1]: Started sshd@15-172.24.4.138:22-172.24.4.1:44434.service - OpenSSH per-connection server daemon (172.24.4.1:44434).
Jan 30 15:47:42.800434 sshd[5376]: Accepted publickey for core from 172.24.4.1 port 44434 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:47:42.802943 sshd[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:42.820412 systemd-logind[1453]: New session 18 of user core.
Jan 30 15:47:42.825943 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 15:47:43.610578 sshd[5376]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:43.617507 systemd[1]: sshd@15-172.24.4.138:22-172.24.4.1:44434.service: Deactivated successfully.
Jan 30 15:47:43.620967 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 15:47:43.623458 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Jan 30 15:47:43.625653 systemd-logind[1453]: Removed session 18.
Jan 30 15:47:48.635167 systemd[1]: Started sshd@16-172.24.4.138:22-172.24.4.1:38652.service - OpenSSH per-connection server daemon (172.24.4.1:38652).
Jan 30 15:47:49.938653 sshd[5389]: Accepted publickey for core from 172.24.4.1 port 38652 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:47:49.941748 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:49.951625 systemd-logind[1453]: New session 19 of user core.
Jan 30 15:47:49.962252 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 15:47:50.636774 sshd[5389]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:50.649656 systemd[1]: sshd@16-172.24.4.138:22-172.24.4.1:38652.service: Deactivated successfully.
Jan 30 15:47:50.653045 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 15:47:50.657805 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Jan 30 15:47:50.664157 systemd[1]: Started sshd@17-172.24.4.138:22-172.24.4.1:38666.service - OpenSSH per-connection server daemon (172.24.4.1:38666).
Jan 30 15:47:50.671220 systemd-logind[1453]: Removed session 19.
Jan 30 15:47:51.969454 sshd[5422]: Accepted publickey for core from 172.24.4.1 port 38666 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:47:51.972363 sshd[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:51.985030 systemd-logind[1453]: New session 20 of user core.
Jan 30 15:47:51.990009 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 15:47:53.088507 sshd[5422]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:53.102299 systemd[1]: sshd@17-172.24.4.138:22-172.24.4.1:38666.service: Deactivated successfully.
Jan 30 15:47:53.106766 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 15:47:53.111491 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Jan 30 15:47:53.118255 systemd[1]: Started sshd@18-172.24.4.138:22-172.24.4.1:38674.service - OpenSSH per-connection server daemon (172.24.4.1:38674).
Jan 30 15:47:53.120819 systemd-logind[1453]: Removed session 20.
Jan 30 15:47:54.515327 sshd[5433]: Accepted publickey for core from 172.24.4.1 port 38674 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:47:54.518333 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:54.529636 systemd-logind[1453]: New session 21 of user core.
Jan 30 15:47:54.537957 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 15:47:56.469898 sshd[5433]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:56.477181 systemd[1]: sshd@18-172.24.4.138:22-172.24.4.1:38674.service: Deactivated successfully.
Jan 30 15:47:56.478379 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 15:47:56.480620 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit.
Jan 30 15:47:56.488901 systemd[1]: Started sshd@19-172.24.4.138:22-172.24.4.1:40316.service - OpenSSH per-connection server daemon (172.24.4.1:40316).
Jan 30 15:47:56.493393 systemd-logind[1453]: Removed session 21.
Jan 30 15:47:57.623076 sshd[5460]: Accepted publickey for core from 172.24.4.1 port 40316 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:47:57.625513 sshd[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:57.632655 systemd-logind[1453]: New session 22 of user core.
Jan 30 15:47:57.637892 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 15:47:58.589611 sshd[5460]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:58.605859 systemd[1]: sshd@19-172.24.4.138:22-172.24.4.1:40316.service: Deactivated successfully.
Jan 30 15:47:58.613208 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 15:47:58.615185 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit.
Jan 30 15:47:58.625488 systemd[1]: Started sshd@20-172.24.4.138:22-172.24.4.1:40328.service - OpenSSH per-connection server daemon (172.24.4.1:40328).
Jan 30 15:47:58.630733 systemd-logind[1453]: Removed session 22.
Jan 30 15:48:00.095352 sshd[5471]: Accepted publickey for core from 172.24.4.1 port 40328 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:48:00.098427 sshd[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:48:00.110801 systemd-logind[1453]: New session 23 of user core.
Jan 30 15:48:00.124997 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 15:48:00.830994 sshd[5471]: pam_unix(sshd:session): session closed for user core
Jan 30 15:48:00.837017 systemd[1]: sshd@20-172.24.4.138:22-172.24.4.1:40328.service: Deactivated successfully.
Jan 30 15:48:00.842042 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 15:48:00.845894 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit.
Jan 30 15:48:00.848488 systemd-logind[1453]: Removed session 23.
Jan 30 15:48:05.851214 systemd[1]: Started sshd@21-172.24.4.138:22-172.24.4.1:52468.service - OpenSSH per-connection server daemon (172.24.4.1:52468).
Jan 30 15:48:07.293804 sshd[5493]: Accepted publickey for core from 172.24.4.1 port 52468 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:48:07.296796 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:48:07.308371 systemd-logind[1453]: New session 24 of user core.
Jan 30 15:48:07.317998 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 15:48:08.588331 sshd[5493]: pam_unix(sshd:session): session closed for user core
Jan 30 15:48:08.596009 systemd[1]: sshd@21-172.24.4.138:22-172.24.4.1:52468.service: Deactivated successfully.
Jan 30 15:48:08.601249 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 15:48:08.603940 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit.
Jan 30 15:48:08.606168 systemd-logind[1453]: Removed session 24.
Jan 30 15:48:13.605017 systemd[1]: Started sshd@22-172.24.4.138:22-172.24.4.1:43636.service - OpenSSH per-connection server daemon (172.24.4.1:43636).
Jan 30 15:48:14.895112 sshd[5532]: Accepted publickey for core from 172.24.4.1 port 43636 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:48:14.900626 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:48:14.920921 systemd-logind[1453]: New session 25 of user core.
Jan 30 15:48:14.928967 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 15:48:15.646648 sshd[5532]: pam_unix(sshd:session): session closed for user core
Jan 30 15:48:15.655081 systemd[1]: sshd@22-172.24.4.138:22-172.24.4.1:43636.service: Deactivated successfully.
Jan 30 15:48:15.656268 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit.
Jan 30 15:48:15.664475 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 15:48:15.675415 systemd-logind[1453]: Removed session 25.
Jan 30 15:48:20.676111 systemd[1]: Started sshd@23-172.24.4.138:22-172.24.4.1:43650.service - OpenSSH per-connection server daemon (172.24.4.1:43650).
Jan 30 15:48:21.870415 sshd[5598]: Accepted publickey for core from 172.24.4.1 port 43650 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:48:21.873624 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:48:21.883963 systemd-logind[1453]: New session 26 of user core.
Jan 30 15:48:21.896042 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 15:48:22.676026 sshd[5598]: pam_unix(sshd:session): session closed for user core
Jan 30 15:48:22.682606 systemd[1]: sshd@23-172.24.4.138:22-172.24.4.1:43650.service: Deactivated successfully.
Jan 30 15:48:22.686474 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 15:48:22.690640 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit.
Jan 30 15:48:22.693036 systemd-logind[1453]: Removed session 26.
Jan 30 15:48:27.699964 systemd[1]: Started sshd@24-172.24.4.138:22-172.24.4.1:49814.service - OpenSSH per-connection server daemon (172.24.4.1:49814).
Jan 30 15:48:29.045397 sshd[5611]: Accepted publickey for core from 172.24.4.1 port 49814 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:48:29.048458 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:48:29.059106 systemd-logind[1453]: New session 27 of user core.
Jan 30 15:48:29.071321 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 15:48:29.763182 sshd[5611]: pam_unix(sshd:session): session closed for user core
Jan 30 15:48:29.770978 systemd[1]: sshd@24-172.24.4.138:22-172.24.4.1:49814.service: Deactivated successfully.
Jan 30 15:48:29.776500 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 15:48:29.778490 systemd-logind[1453]: Session 27 logged out. Waiting for processes to exit.
Jan 30 15:48:29.781585 systemd-logind[1453]: Removed session 27.
Jan 30 15:48:34.783234 systemd[1]: Started sshd@25-172.24.4.138:22-172.24.4.1:42496.service - OpenSSH per-connection server daemon (172.24.4.1:42496).
Jan 30 15:48:36.183342 sshd[5624]: Accepted publickey for core from 172.24.4.1 port 42496 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:48:36.186087 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:48:36.196056 systemd-logind[1453]: New session 28 of user core.
Jan 30 15:48:36.204999 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 15:48:36.875602 sshd[5624]: pam_unix(sshd:session): session closed for user core
Jan 30 15:48:36.882392 systemd[1]: sshd@25-172.24.4.138:22-172.24.4.1:42496.service: Deactivated successfully.
Jan 30 15:48:36.889224 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 15:48:36.892003 systemd-logind[1453]: Session 28 logged out. Waiting for processes to exit.
Jan 30 15:48:36.894368 systemd-logind[1453]: Removed session 28.