Jan 29 13:04:10.035644 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 13:04:10.035676 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 13:04:10.035687 kernel: BIOS-provided physical RAM map: Jan 29 13:04:10.035697 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 13:04:10.035705 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 13:04:10.035718 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 13:04:10.035728 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 29 13:04:10.035737 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 29 13:04:10.035764 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 13:04:10.035774 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 13:04:10.035783 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 29 13:04:10.035792 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 13:04:10.035800 kernel: NX (Execute Disable) protection: active Jan 29 13:04:10.035809 kernel: APIC: Static calls initialized Jan 29 13:04:10.035823 kernel: SMBIOS 3.0.0 present. Jan 29 13:04:10.035833 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 29 13:04:10.035842 kernel: Hypervisor detected: KVM Jan 29 13:04:10.035851 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 13:04:10.035861 kernel: kvm-clock: using sched offset of 3504165318 cycles Jan 29 13:04:10.035873 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 13:04:10.035883 kernel: tsc: Detected 1996.249 MHz processor Jan 29 13:04:10.035893 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 13:04:10.035903 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 13:04:10.035913 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 29 13:04:10.035923 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 13:04:10.035933 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 13:04:10.035943 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 29 13:04:10.035952 kernel: ACPI: Early table checksum verification disabled Jan 29 13:04:10.035964 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 29 13:04:10.035974 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 13:04:10.035984 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 13:04:10.035994 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 13:04:10.036004 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 29 13:04:10.036014 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 13:04:10.036023 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 29 13:04:10.036033 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 29 13:04:10.036043 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 29 13:04:10.036055 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 29 13:04:10.036064 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 29 13:04:10.036074 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 29 13:04:10.036088 kernel: No NUMA configuration found Jan 29 13:04:10.036098 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 29 13:04:10.036108 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 29 13:04:10.036120 kernel: Zone ranges: Jan 29 13:04:10.036130 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 13:04:10.036140 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 13:04:10.036150 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 29 13:04:10.036160 kernel: Movable zone start for each node Jan 29 13:04:10.036170 kernel: Early memory node ranges Jan 29 13:04:10.036180 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 13:04:10.036190 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 29 13:04:10.036203 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 29 13:04:10.036213 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 29 13:04:10.036223 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 13:04:10.036234 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 13:04:10.036244 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 29 13:04:10.036254 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 13:04:10.036264 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 13:04:10.036274 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 13:04:10.036284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 13:04:10.036296 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 13:04:10.036307 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 13:04:10.036317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 13:04:10.036327 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 13:04:10.036337 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 13:04:10.036347 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 13:04:10.036358 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 13:04:10.036368 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 29 13:04:10.036378 kernel: Booting paravirtualized kernel on KVM Jan 29 13:04:10.036495 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 13:04:10.036508 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 13:04:10.036518 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 13:04:10.036528 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 13:04:10.036538 kernel: pcpu-alloc: [0] 0 1 Jan 29 13:04:10.036548 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 29 13:04:10.036560 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 13:04:10.036571 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 13:04:10.036584 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 13:04:10.036594 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 13:04:10.036605 kernel: Fallback order for Node 0: 0 Jan 29 13:04:10.036615 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 29 13:04:10.036625 kernel: Policy zone: Normal Jan 29 13:04:10.036635 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 13:04:10.036645 kernel: software IO TLB: area num 2. Jan 29 13:04:10.036656 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 29 13:04:10.036666 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 13:04:10.036678 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 13:04:10.036688 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 13:04:10.036699 kernel: Dynamic Preempt: voluntary Jan 29 13:04:10.036709 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 13:04:10.036720 kernel: rcu: RCU event tracing is enabled. Jan 29 13:04:10.036730 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 13:04:10.036740 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 13:04:10.036751 kernel: Rude variant of Tasks RCU enabled. Jan 29 13:04:10.036761 kernel: Tracing variant of Tasks RCU enabled. Jan 29 13:04:10.036773 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 13:04:10.036783 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 13:04:10.036794 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 13:04:10.036804 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 13:04:10.036814 kernel: Console: colour VGA+ 80x25 Jan 29 13:04:10.036824 kernel: printk: console [tty0] enabled Jan 29 13:04:10.036834 kernel: printk: console [ttyS0] enabled Jan 29 13:04:10.036844 kernel: ACPI: Core revision 20230628 Jan 29 13:04:10.036854 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 13:04:10.036864 kernel: x2apic enabled Jan 29 13:04:10.036877 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 13:04:10.036886 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 13:04:10.036895 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 13:04:10.036904 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 29 13:04:10.036913 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 29 13:04:10.036922 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 29 13:04:10.036931 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 13:04:10.036940 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 13:04:10.036949 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 13:04:10.036960 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 13:04:10.036969 kernel: Speculative Store Bypass: Vulnerable Jan 29 13:04:10.036978 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 29 13:04:10.036988 kernel: Freeing SMP alternatives memory: 32K Jan 29 13:04:10.037002 kernel: pid_max: default: 32768 minimum: 301 Jan 29 13:04:10.037013 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 13:04:10.037023 kernel: landlock: Up and running. Jan 29 13:04:10.037032 kernel: SELinux: Initializing. Jan 29 13:04:10.037041 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 13:04:10.037051 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 13:04:10.037060 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 29 13:04:10.037072 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 13:04:10.037082 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 13:04:10.037091 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 13:04:10.037101 kernel: Performance Events: AMD PMU driver. Jan 29 13:04:10.037110 kernel: ... version: 0 Jan 29 13:04:10.037122 kernel: ... bit width: 48 Jan 29 13:04:10.037131 kernel: ... generic registers: 4 Jan 29 13:04:10.037141 kernel: ... value mask: 0000ffffffffffff Jan 29 13:04:10.037150 kernel: ... max period: 00007fffffffffff Jan 29 13:04:10.037160 kernel: ... fixed-purpose events: 0 Jan 29 13:04:10.037169 kernel: ... event mask: 000000000000000f Jan 29 13:04:10.037178 kernel: signal: max sigframe size: 1440 Jan 29 13:04:10.037188 kernel: rcu: Hierarchical SRCU implementation. Jan 29 13:04:10.037197 kernel: rcu: Max phase no-delay instances is 400. Jan 29 13:04:10.037209 kernel: smp: Bringing up secondary CPUs ... Jan 29 13:04:10.037218 kernel: smpboot: x86: Booting SMP configuration: Jan 29 13:04:10.037227 kernel: .... 
node #0, CPUs: #1 Jan 29 13:04:10.037237 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 13:04:10.037246 kernel: smpboot: Max logical packages: 2 Jan 29 13:04:10.037256 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 29 13:04:10.037265 kernel: devtmpfs: initialized Jan 29 13:04:10.037274 kernel: x86/mm: Memory block size: 128MB Jan 29 13:04:10.037284 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 13:04:10.037294 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 13:04:10.037305 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 13:04:10.037314 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 13:04:10.037324 kernel: audit: initializing netlink subsys (disabled) Jan 29 13:04:10.037333 kernel: audit: type=2000 audit(1738155849.169:1): state=initialized audit_enabled=0 res=1 Jan 29 13:04:10.037343 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 13:04:10.037352 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 13:04:10.037362 kernel: cpuidle: using governor menu Jan 29 13:04:10.037371 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 13:04:10.037380 kernel: dca service started, version 1.12.1 Jan 29 13:04:10.037422 kernel: PCI: Using configuration type 1 for base access Jan 29 13:04:10.037433 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 13:04:10.037442 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 13:04:10.037452 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 13:04:10.037461 kernel: ACPI: Added _OSI(Module Device) Jan 29 13:04:10.037470 kernel: ACPI: Added _OSI(Processor Device) Jan 29 13:04:10.037480 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 13:04:10.037489 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 13:04:10.037498 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 13:04:10.037510 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 13:04:10.037520 kernel: ACPI: Interpreter enabled Jan 29 13:04:10.037529 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 13:04:10.037538 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 13:04:10.037548 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 13:04:10.037557 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 13:04:10.037567 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 29 13:04:10.037576 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 13:04:10.037732 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 13:04:10.037844 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 13:04:10.037941 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 13:04:10.037955 kernel: acpiphp: Slot [3] registered Jan 29 13:04:10.037965 kernel: acpiphp: Slot [4] registered Jan 29 13:04:10.037974 kernel: acpiphp: Slot [5] registered Jan 29 13:04:10.037984 kernel: acpiphp: Slot [6] registered Jan 29 13:04:10.037993 kernel: acpiphp: Slot [7] registered Jan 29 13:04:10.038006 kernel: acpiphp: Slot [8] registered Jan 29 13:04:10.038015 kernel: acpiphp: Slot [9] registered Jan 29 13:04:10.038024 kernel: acpiphp: Slot [10] registered Jan 29 13:04:10.038034 
kernel: acpiphp: Slot [11] registered Jan 29 13:04:10.038043 kernel: acpiphp: Slot [12] registered Jan 29 13:04:10.038052 kernel: acpiphp: Slot [13] registered Jan 29 13:04:10.038062 kernel: acpiphp: Slot [14] registered Jan 29 13:04:10.038071 kernel: acpiphp: Slot [15] registered Jan 29 13:04:10.038080 kernel: acpiphp: Slot [16] registered Jan 29 13:04:10.038091 kernel: acpiphp: Slot [17] registered Jan 29 13:04:10.038100 kernel: acpiphp: Slot [18] registered Jan 29 13:04:10.038110 kernel: acpiphp: Slot [19] registered Jan 29 13:04:10.038119 kernel: acpiphp: Slot [20] registered Jan 29 13:04:10.038128 kernel: acpiphp: Slot [21] registered Jan 29 13:04:10.038138 kernel: acpiphp: Slot [22] registered Jan 29 13:04:10.038147 kernel: acpiphp: Slot [23] registered Jan 29 13:04:10.038156 kernel: acpiphp: Slot [24] registered Jan 29 13:04:10.038165 kernel: acpiphp: Slot [25] registered Jan 29 13:04:10.038175 kernel: acpiphp: Slot [26] registered Jan 29 13:04:10.038186 kernel: acpiphp: Slot [27] registered Jan 29 13:04:10.038195 kernel: acpiphp: Slot [28] registered Jan 29 13:04:10.038205 kernel: acpiphp: Slot [29] registered Jan 29 13:04:10.038214 kernel: acpiphp: Slot [30] registered Jan 29 13:04:10.038223 kernel: acpiphp: Slot [31] registered Jan 29 13:04:10.038233 kernel: PCI host bridge to bus 0000:00 Jan 29 13:04:10.038338 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 13:04:10.038457 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 13:04:10.038554 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 13:04:10.038641 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 13:04:10.038725 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 29 13:04:10.038809 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 13:04:10.038923 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 13:04:10.039030 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 13:04:10.039134 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 29 13:04:10.039227 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 29 13:04:10.039318 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 13:04:10.041477 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 13:04:10.041597 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 13:04:10.041695 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 13:04:10.041803 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 13:04:10.041906 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 29 13:04:10.042003 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 29 13:04:10.042108 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 29 13:04:10.042207 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 29 13:04:10.042304 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 29 13:04:10.042435 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 29 13:04:10.042556 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 29 13:04:10.042658 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 13:04:10.042762 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 29 13:04:10.042859 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 29 13:04:10.042954 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 29 13:04:10.043051 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 29 13:04:10.043146 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 29 13:04:10.043250 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 29 13:04:10.043353 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 13:04:10.043511 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 29 13:04:10.043611 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 29 13:04:10.043726 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 29 13:04:10.043845 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 29 13:04:10.043944 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 29 13:04:10.044049 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 13:04:10.044157 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 29 13:04:10.044254 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 29 13:04:10.044349 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 29 13:04:10.044364 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 13:04:10.044374 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 13:04:10.044384 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 13:04:10.044415 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 13:04:10.044427 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 13:04:10.044441 kernel: iommu: Default domain type: Translated Jan 29 13:04:10.044450 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 13:04:10.044460 kernel: PCI: Using ACPI for IRQ routing Jan 29 13:04:10.044469 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 13:04:10.044479 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 13:04:10.044488 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 29 13:04:10.044589 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 29 13:04:10.044687 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 29 13:04:10.044790 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 13:04:10.044804 kernel: vgaarb: loaded Jan 29 13:04:10.044814 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 13:04:10.044823 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 13:04:10.044833 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 13:04:10.044842 kernel: pnp: PnP ACPI init Jan 29 13:04:10.044941 kernel: pnp 00:03: [dma 2] Jan 29 13:04:10.044957 kernel: pnp: PnP ACPI: found 5 devices Jan 29 13:04:10.044967 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 13:04:10.044980 kernel: NET: Registered PF_INET protocol family Jan 29 13:04:10.044990 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 13:04:10.045000 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 13:04:10.045009 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 13:04:10.045019 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 13:04:10.045029 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 29 13:04:10.045038 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 13:04:10.045048 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 13:04:10.045059 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 13:04:10.045069 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 13:04:10.045078 kernel: NET: Registered PF_XDP protocol family Jan 29 13:04:10.045166 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 13:04:10.045252 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 13:04:10.045337 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 13:04:10.047193 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 29 13:04:10.047288 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 29 13:04:10.047407 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 29 13:04:10.047519 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 13:04:10.047535 kernel: PCI: CLS 0 bytes, default 64 Jan 29 13:04:10.047545 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 13:04:10.047555 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 29 13:04:10.047564 kernel: Initialise system trusted keyrings Jan 29 13:04:10.047574 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 13:04:10.047584 kernel: Key type asymmetric registered Jan 29 13:04:10.047593 kernel: Asymmetric key parser 'x509' registered Jan 29 13:04:10.047606 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 13:04:10.047616 kernel: io scheduler mq-deadline registered Jan 29 13:04:10.047626 kernel: io scheduler kyber registered Jan 29 13:04:10.047635 kernel: io scheduler bfq registered Jan 29 13:04:10.047645 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 13:04:10.047665 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 29 13:04:10.047675 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 13:04:10.047685 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 13:04:10.047694 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 13:04:10.047706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 13:04:10.047716 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 13:04:10.047725 kernel: random: crng init done Jan 29 13:04:10.047735 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 13:04:10.047760 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 13:04:10.047770 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 13:04:10.047876 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 13:04:10.047892 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 13:04:10.047979 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 13:04:10.048075 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T13:04:09 UTC (1738155849) Jan 29 13:04:10.048164 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 29 13:04:10.048178 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 13:04:10.048188 kernel: NET: Registered PF_INET6 protocol family Jan 29 13:04:10.048197 kernel: Segment Routing with IPv6 Jan 29 13:04:10.048207 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 13:04:10.048216 kernel: NET: Registered PF_PACKET 
protocol family Jan 29 13:04:10.048226 kernel: Key type dns_resolver registered Jan 29 13:04:10.048239 kernel: IPI shorthand broadcast: enabled Jan 29 13:04:10.048248 kernel: sched_clock: Marking stable (1052008251, 177229387)->(1269314579, -40076941) Jan 29 13:04:10.048258 kernel: registered taskstats version 1 Jan 29 13:04:10.048267 kernel: Loading compiled-in X.509 certificates Jan 29 13:04:10.048277 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 13:04:10.048287 kernel: Key type .fscrypt registered Jan 29 13:04:10.048296 kernel: Key type fscrypt-provisioning registered Jan 29 13:04:10.048305 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 13:04:10.048315 kernel: ima: Allocated hash algorithm: sha1 Jan 29 13:04:10.048326 kernel: ima: No architecture policies found Jan 29 13:04:10.048336 kernel: clk: Disabling unused clocks Jan 29 13:04:10.048345 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 13:04:10.048355 kernel: Write protecting the kernel read-only data: 36864k Jan 29 13:04:10.048364 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 13:04:10.048374 kernel: Run /init as init process Jan 29 13:04:10.048383 kernel: with arguments: Jan 29 13:04:10.048409 kernel: /init Jan 29 13:04:10.048419 kernel: with environment: Jan 29 13:04:10.048431 kernel: HOME=/ Jan 29 13:04:10.048440 kernel: TERM=linux Jan 29 13:04:10.048449 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 13:04:10.048461 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 13:04:10.048474 systemd[1]: Detected virtualization kvm. Jan 29 13:04:10.048485 systemd[1]: Detected architecture x86-64. Jan 29 13:04:10.048495 systemd[1]: Running in initrd. Jan 29 13:04:10.048507 systemd[1]: No hostname configured, using default hostname. Jan 29 13:04:10.048517 systemd[1]: Hostname set to . Jan 29 13:04:10.048528 systemd[1]: Initializing machine ID from VM UUID. Jan 29 13:04:10.048538 systemd[1]: Queued start job for default target initrd.target. Jan 29 13:04:10.048548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 13:04:10.048559 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 13:04:10.048570 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 13:04:10.048590 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 13:04:10.048602 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 13:04:10.048613 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 13:04:10.048626 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 13:04:10.048636 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 13:04:10.048649 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 29 13:04:10.048660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 13:04:10.048670 systemd[1]: Reached target paths.target - Path Units. Jan 29 13:04:10.048681 systemd[1]: Reached target slices.target - Slice Units. Jan 29 13:04:10.048691 systemd[1]: Reached target swap.target - Swaps. Jan 29 13:04:10.048702 systemd[1]: Reached target timers.target - Timer Units. Jan 29 13:04:10.048712 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 13:04:10.048723 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 13:04:10.048733 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 13:04:10.048746 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 13:04:10.048756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 13:04:10.048767 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 13:04:10.048778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 13:04:10.048788 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 13:04:10.048799 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 13:04:10.048809 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 13:04:10.048820 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 13:04:10.048830 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 13:04:10.048843 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 13:04:10.048854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 13:04:10.048885 systemd-journald[184]: Collecting audit messages is disabled. Jan 29 13:04:10.048910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 13:04:10.048924 systemd-journald[184]: Journal started Jan 29 13:04:10.048947 systemd-journald[184]: Runtime Journal (/run/log/journal/891bc8b40c92423b86fdd6514b956dfd) is 8.0M, max 78.3M, 70.3M free. Jan 29 13:04:10.053465 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 13:04:10.054343 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 13:04:10.055935 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 13:04:10.057349 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 13:04:10.064744 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 13:04:10.069546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 13:04:10.079187 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 13:04:10.087524 systemd-modules-load[185]: Inserted module 'overlay' Jan 29 13:04:10.135497 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 13:04:10.135523 kernel: Bridge firewalling registered Jan 29 13:04:10.094628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 13:04:10.121793 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 29 13:04:10.134533 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 13:04:10.139526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 13:04:10.140211 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 13:04:10.148588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 13:04:10.152554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 13:04:10.153282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 13:04:10.163869 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 13:04:10.167553 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 13:04:10.168301 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 13:04:10.180597 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 13:04:10.194094 dracut-cmdline[217]: dracut-dracut-053 Jan 29 13:04:10.196036 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 13:04:10.214476 systemd-resolved[220]: Positive Trust Anchors: Jan 29 13:04:10.215248 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 13:04:10.216156 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 13:04:10.221305 systemd-resolved[220]: Defaulting to hostname 'linux'. Jan 29 13:04:10.222155 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 13:04:10.223106 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 13:04:10.265448 kernel: SCSI subsystem initialized Jan 29 13:04:10.275455 kernel: Loading iSCSI transport class v2.0-870. Jan 29 13:04:10.287803 kernel: iscsi: registered transport (tcp) Jan 29 13:04:10.309531 kernel: iscsi: registered transport (qla4xxx) Jan 29 13:04:10.309586 kernel: QLogic iSCSI HBA Driver Jan 29 13:04:10.359131 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 13:04:10.363636 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 13:04:10.397844 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 13:04:10.397897 kernel: device-mapper: uevent: version 1.0.3 Jan 29 13:04:10.398522 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 13:04:10.460463 kernel: raid6: sse2x4 gen() 4759 MB/s Jan 29 13:04:10.478467 kernel: raid6: sse2x2 gen() 8283 MB/s Jan 29 13:04:10.497112 kernel: raid6: sse2x1 gen() 9590 MB/s Jan 29 13:04:10.497175 kernel: raid6: using algorithm sse2x1 gen() 9590 MB/s Jan 29 13:04:10.515897 kernel: raid6: .... xor() 7183 MB/s, rmw enabled Jan 29 13:04:10.515961 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 13:04:10.538465 kernel: xor: measuring software checksum speed Jan 29 13:04:10.538532 kernel: prefetch64-sse : 17274 MB/sec Jan 29 13:04:10.539492 kernel: generic_sse : 14396 MB/sec Jan 29 13:04:10.542093 kernel: xor: using function: prefetch64-sse (17274 MB/sec) Jan 29 13:04:10.725454 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 13:04:10.742014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 13:04:10.752658 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 13:04:10.764341 systemd-udevd[403]: Using default interface naming scheme 'v255'. Jan 29 13:04:10.768262 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 13:04:10.777645 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 13:04:10.805281 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Jan 29 13:04:10.851238 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 13:04:10.859659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 13:04:10.935292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 13:04:10.947016 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 13:04:11.000881 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 13:04:11.004902 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 13:04:11.005480 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 13:04:11.006000 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 13:04:11.012779 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 13:04:11.034050 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 13:04:11.051451 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 29 13:04:11.076775 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 29 13:04:11.076904 kernel: libata version 3.00 loaded. Jan 29 13:04:11.076919 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 29 13:04:11.082106 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 13:04:11.082121 kernel: GPT:17805311 != 20971519 Jan 29 13:04:11.082133 kernel: scsi host0: ata_piix Jan 29 13:04:11.082267 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 13:04:11.082286 kernel: GPT:17805311 != 20971519 Jan 29 13:04:11.082297 kernel: scsi host1: ata_piix Jan 29 13:04:11.082430 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 29 13:04:11.082444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 13:04:11.082455 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 29 13:04:11.082467 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 29 13:04:11.056825 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 13:04:11.056951 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 13:04:11.057992 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 13:04:11.059807 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 13:04:11.059953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 13:04:11.061700 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 13:04:11.073588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 13:04:11.114804 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) Jan 29 13:04:11.115417 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (473) Jan 29 13:04:11.138120 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 13:04:11.163475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 13:04:11.170099 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 13:04:11.175723 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 13:04:11.180262 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 13:04:11.180893 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 13:04:11.193549 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 13:04:11.197576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 13:04:11.204833 disk-uuid[509]: Primary Header is updated. Jan 29 13:04:11.204833 disk-uuid[509]: Secondary Entries is updated. Jan 29 13:04:11.204833 disk-uuid[509]: Secondary Header is updated. Jan 29 13:04:11.213442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 13:04:11.219422 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 13:04:11.223050 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 13:04:12.231491 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 13:04:12.233836 disk-uuid[511]: The operation has completed successfully. Jan 29 13:04:12.314777 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 13:04:12.315027 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 13:04:12.339517 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 13:04:12.351561 sh[534]: Success Jan 29 13:04:12.382572 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 29 13:04:12.443094 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 13:04:12.458635 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 13:04:12.468636 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 13:04:12.485806 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 13:04:12.485880 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 13:04:12.487864 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 13:04:12.491221 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 13:04:12.491284 kernel: BTRFS info (device dm-0): using free space tree Jan 29 13:04:12.507876 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 13:04:12.509012 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 13:04:12.518543 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 13:04:12.523110 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 13:04:12.532428 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 13:04:12.532468 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 13:04:12.532481 kernel: BTRFS info (device vda6): using free space tree Jan 29 13:04:12.547416 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 13:04:12.562219 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 13:04:12.563883 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 13:04:12.576988 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 13:04:12.584565 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 13:04:12.665198 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 13:04:12.674578 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 13:04:12.697416 systemd-networkd[716]: lo: Link UP Jan 29 13:04:12.697425 systemd-networkd[716]: lo: Gained carrier Jan 29 13:04:12.699792 systemd-networkd[716]: Enumeration completed Jan 29 13:04:12.700856 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 13:04:12.702321 systemd-networkd[716]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 13:04:12.702325 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 13:04:12.704181 systemd[1]: Reached target network.target - Network. Jan 29 13:04:12.704506 systemd-networkd[716]: eth0: Link UP Jan 29 13:04:12.704509 systemd-networkd[716]: eth0: Gained carrier Jan 29 13:04:12.704517 systemd-networkd[716]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 13:04:12.717552 systemd-networkd[716]: eth0: DHCPv4 address 172.24.4.245/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 13:04:12.734730 ignition[625]: Ignition 2.19.0 Jan 29 13:04:12.735537 ignition[625]: Stage: fetch-offline Jan 29 13:04:12.735576 ignition[625]: no configs at "/usr/lib/ignition/base.d" Jan 29 13:04:12.735586 ignition[625]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 13:04:12.738144 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 13:04:12.735677 ignition[625]: parsed url from cmdline: "" Jan 29 13:04:12.735681 ignition[625]: no config URL provided Jan 29 13:04:12.735687 ignition[625]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 13:04:12.735695 ignition[625]: no config at "/usr/lib/ignition/user.ign" Jan 29 13:04:12.735700 ignition[625]: failed to fetch config: resource requires networking Jan 29 13:04:12.735902 ignition[625]: Ignition finished successfully Jan 29 13:04:12.743596 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 13:04:12.757626 ignition[724]: Ignition 2.19.0 Jan 29 13:04:12.757641 ignition[724]: Stage: fetch Jan 29 13:04:12.757856 ignition[724]: no configs at "/usr/lib/ignition/base.d" Jan 29 13:04:12.757875 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 13:04:12.758005 ignition[724]: parsed url from cmdline: "" Jan 29 13:04:12.758010 ignition[724]: no config URL provided Jan 29 13:04:12.758017 ignition[724]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 13:04:12.758027 ignition[724]: no config at "/usr/lib/ignition/user.ign" Jan 29 13:04:12.758122 ignition[724]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 29 13:04:12.758148 ignition[724]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 29 13:04:12.758158 ignition[724]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 29 13:04:12.951088 ignition[724]: GET result: OK Jan 29 13:04:12.951384 ignition[724]: parsing config with SHA512: 759e0c2e2edc19e2940adc8f3d4db02e64c839804fbfcf42e4db06344028c69969798e7f5ced279b764ed135b67108f21646203adb66c89ae45dace95a509de4 Jan 29 13:04:12.963728 unknown[724]: fetched base config from "system" Jan 29 13:04:12.963776 unknown[724]: fetched base config from "system" Jan 29 13:04:12.964858 ignition[724]: fetch: fetch complete Jan 29 13:04:12.963791 unknown[724]: fetched user config from "openstack" Jan 29 13:04:12.964871 ignition[724]: fetch: fetch passed Jan 29 13:04:12.968074 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 13:04:12.964960 ignition[724]: Ignition finished successfully Jan 29 13:04:12.975646 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 13:04:13.017814 ignition[731]: Ignition 2.19.0 Jan 29 13:04:13.017840 ignition[731]: Stage: kargs Jan 29 13:04:13.018243 ignition[731]: no configs at "/usr/lib/ignition/base.d" Jan 29 13:04:13.018271 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 13:04:13.024953 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 13:04:13.020625 ignition[731]: kargs: kargs passed Jan 29 13:04:13.020727 ignition[731]: Ignition finished successfully Jan 29 13:04:13.034736 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 13:04:13.076098 ignition[737]: Ignition 2.19.0 Jan 29 13:04:13.076124 ignition[737]: Stage: disks Jan 29 13:04:13.076556 ignition[737]: no configs at "/usr/lib/ignition/base.d" Jan 29 13:04:13.076586 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 13:04:13.080954 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 13:04:13.078816 ignition[737]: disks: disks passed Jan 29 13:04:13.084627 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 29 13:04:13.078918 ignition[737]: Ignition finished successfully Jan 29 13:04:13.086579 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 13:04:13.089095 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 13:04:13.092094 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 13:04:13.094597 systemd[1]: Reached target basic.target - Basic System. Jan 29 13:04:13.105818 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 13:04:13.137105 systemd-fsck[745]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 13:04:13.147297 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 13:04:13.156559 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 13:04:13.332462 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 13:04:13.332491 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 13:04:13.333556 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 13:04:13.342612 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 13:04:13.345968 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 13:04:13.349796 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 13:04:13.368466 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (753) Jan 29 13:04:13.368519 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 13:04:13.368550 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 13:04:13.368578 kernel: BTRFS info (device vda6): using free space tree Jan 29 13:04:13.368621 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 13:04:13.362382 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 29 13:04:13.369913 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 13:04:13.369947 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 13:04:13.373607 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 13:04:13.390820 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 13:04:13.395061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 13:04:13.490874 initrd-setup-root[781]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 13:04:13.496984 initrd-setup-root[788]: cut: /sysroot/etc/group: No such file or directory Jan 29 13:04:13.505224 initrd-setup-root[796]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 13:04:13.513728 initrd-setup-root[803]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 13:04:13.609154 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 13:04:13.614484 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 13:04:13.616515 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 13:04:13.623603 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 29 13:04:13.625001 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 13:04:13.647025 ignition[871]: INFO : Ignition 2.19.0 Jan 29 13:04:13.648561 ignition[871]: INFO : Stage: mount Jan 29 13:04:13.648561 ignition[871]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 13:04:13.648561 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 13:04:13.650483 ignition[871]: INFO : mount: mount passed Jan 29 13:04:13.650483 ignition[871]: INFO : Ignition finished successfully Jan 29 13:04:13.652150 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 13:04:13.655084 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 13:04:13.826700 systemd-networkd[716]: eth0: Gained IPv6LL Jan 29 13:04:20.594882 coreos-metadata[768]: Jan 29 13:04:20.594 WARN failed to locate config-drive, using the metadata service API instead Jan 29 13:04:20.635578 coreos-metadata[768]: Jan 29 13:04:20.635 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 13:04:20.651483 coreos-metadata[768]: Jan 29 13:04:20.651 INFO Fetch successful Jan 29 13:04:20.653001 coreos-metadata[768]: Jan 29 13:04:20.652 INFO wrote hostname ci-4081-3-0-e-f5d4e76a77.novalocal to /sysroot/etc/hostname Jan 29 13:04:20.655585 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 29 13:04:20.655843 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 29 13:04:20.665583 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 13:04:20.702807 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 13:04:20.719458 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (888) Jan 29 13:04:20.731166 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 13:04:20.731234 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 13:04:20.735236 kernel: BTRFS info (device vda6): using free space tree Jan 29 13:04:20.747473 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 13:04:20.753101 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 13:04:20.803781 ignition[906]: INFO : Ignition 2.19.0
Jan 29 13:04:20.803781 ignition[906]: INFO : Stage: files
Jan 29 13:04:20.806735 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 13:04:20.806735 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 13:04:20.806735 ignition[906]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 13:04:20.812582 ignition[906]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 13:04:20.812582 ignition[906]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 13:04:20.816644 ignition[906]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 13:04:20.816644 ignition[906]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 13:04:20.820535 ignition[906]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 13:04:20.820535 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 13:04:20.820535 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 13:04:20.816692 unknown[906]: wrote ssh authorized keys file for user: core
Jan 29 13:04:20.902892 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 13:04:21.194783 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 13:04:21.194783 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 13:04:21.199770 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 13:04:21.757629 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 13:04:23.342306 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 13:04:23.342306 ignition[906]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 13:04:23.350767 ignition[906]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 13:04:23.350767 ignition[906]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 13:04:23.350767 ignition[906]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 13:04:23.350767 ignition[906]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 13:04:23.350767 ignition[906]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 13:04:23.350767 ignition[906]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 13:04:23.350767 ignition[906]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 13:04:23.350767 ignition[906]: INFO : files: files passed
Jan 29 13:04:23.350767 ignition[906]: INFO : Ignition finished successfully
Jan 29 13:04:23.346227 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 13:04:23.357796 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 13:04:23.362224 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 13:04:23.373788 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 13:04:23.373977 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 13:04:23.385940 initrd-setup-root-after-ignition[938]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 13:04:23.387562 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 13:04:23.387562 initrd-setup-root-after-ignition[934]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 13:04:23.388712 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 13:04:23.390945 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 13:04:23.398585 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 13:04:23.438888 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 13:04:23.439148 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 13:04:23.441520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 13:04:23.446990 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 13:04:23.448954 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 13:04:23.453642 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 13:04:23.471370 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 13:04:23.483534 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 13:04:23.502809 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 13:04:23.506015 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 13:04:23.507953 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 13:04:23.509662 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 13:04:23.510056 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 13:04:23.512685 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 13:04:23.514834 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 13:04:23.516902 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 13:04:23.519177 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 13:04:23.521560 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 13:04:23.523931 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 13:04:23.526026 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 13:04:23.528534 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 13:04:23.530808 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 13:04:23.532970 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 13:04:23.534837 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 13:04:23.535228 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 13:04:23.537536 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 13:04:23.539779 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 13:04:23.541904 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 13:04:23.542260 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 13:04:23.543998 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 13:04:23.544264 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 13:04:23.546656 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 13:04:23.546959 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 13:04:23.548716 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 13:04:23.548913 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 13:04:23.560896 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 13:04:23.561495 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 13:04:23.561655 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 13:04:23.564600 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 13:04:23.565117 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 13:04:23.565286 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 13:04:23.565990 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 13:04:23.567933 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 13:04:23.574866 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 13:04:23.574955 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 13:04:23.582152 ignition[958]: INFO : Ignition 2.19.0
Jan 29 13:04:23.582152 ignition[958]: INFO : Stage: umount
Jan 29 13:04:23.585238 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 13:04:23.585238 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 13:04:23.585238 ignition[958]: INFO : umount: umount passed
Jan 29 13:04:23.585238 ignition[958]: INFO : Ignition finished successfully
Jan 29 13:04:23.584888 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 13:04:23.584983 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 13:04:23.585869 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 13:04:23.585912 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 13:04:23.586957 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 13:04:23.586998 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 13:04:23.589224 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 13:04:23.589262 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 13:04:23.590301 systemd[1]: Stopped target network.target - Network.
Jan 29 13:04:23.591212 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 13:04:23.591255 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 13:04:23.593311 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 13:04:23.594528 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 13:04:23.596556 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 13:04:23.598102 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 13:04:23.600970 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 13:04:23.602107 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 13:04:23.602145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 13:04:23.603280 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 13:04:23.603315 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 13:04:23.604276 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 13:04:23.604322 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 13:04:23.605449 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 13:04:23.605488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 13:04:23.606805 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 13:04:23.607944 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 13:04:23.609908 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 13:04:23.610384 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 13:04:23.610488 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 13:04:23.611068 systemd-networkd[716]: eth0: DHCPv6 lease lost
Jan 29 13:04:23.612215 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 13:04:23.612285 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 13:04:23.613241 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 13:04:23.613342 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 13:04:23.614562 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 13:04:23.614608 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 13:04:23.621482 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 13:04:23.621966 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 13:04:23.622015 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 13:04:23.622637 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 13:04:23.623554 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 13:04:23.623634 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 13:04:23.628135 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 13:04:23.628204 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 13:04:23.633461 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 13:04:23.633507 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 13:04:23.636093 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 13:04:23.636135 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 13:04:23.637598 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 13:04:23.637723 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 13:04:23.638554 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 13:04:23.638630 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 13:04:23.640088 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 13:04:23.640143 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 13:04:23.641201 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 13:04:23.641232 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 13:04:23.642315 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 13:04:23.642357 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 13:04:23.643941 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 13:04:23.643980 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 13:04:23.645199 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 13:04:23.645240 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 13:04:23.654733 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 13:04:23.655948 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 13:04:23.656007 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 13:04:23.657267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 13:04:23.657309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 13:04:23.658884 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 13:04:23.658977 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 13:04:23.660173 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 13:04:23.668747 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 13:04:23.676187 systemd[1]: Switching root.
Jan 29 13:04:23.699721 systemd-journald[184]: Journal stopped
Jan 29 13:04:25.417631 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 29 13:04:25.417680 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 13:04:25.417702 kernel: SELinux: policy capability open_perms=1
Jan 29 13:04:25.417717 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 13:04:25.417728 kernel: SELinux: policy capability always_check_network=0
Jan 29 13:04:25.417744 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 13:04:25.417759 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 13:04:25.417771 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 13:04:25.417781 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 13:04:25.417792 kernel: audit: type=1403 audit(1738155864.410:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 13:04:25.417806 systemd[1]: Successfully loaded SELinux policy in 73.731ms.
Jan 29 13:04:25.417820 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.389ms.
Jan 29 13:04:25.417833 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 13:04:25.417848 systemd[1]: Detected virtualization kvm.
Jan 29 13:04:25.417860 systemd[1]: Detected architecture x86-64.
Jan 29 13:04:25.417872 systemd[1]: Detected first boot.
Jan 29 13:04:25.417883 systemd[1]: Hostname set to <ci-4081-3-0-e-f5d4e76a77.novalocal>.
Jan 29 13:04:25.417895 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 13:04:25.417908 zram_generator::config[1000]: No configuration found.
Jan 29 13:04:25.417920 systemd[1]: Populated /etc with preset unit settings.
Jan 29 13:04:25.417933 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 13:04:25.417945 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 13:04:25.417957 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 13:04:25.417969 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 13:04:25.417982 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 13:04:25.417994 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 13:04:25.418006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 13:04:25.418018 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 13:04:25.418030 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 13:04:25.418044 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 13:04:25.418056 systemd[1]: Created slice user.slice - User and Session Slice.
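
The handoff from the initramfs to the real root is visible above as the gap between the last message from the initrd journald ("Journal stopped", PID 184) and the first messages recorded once systemd restarts in the new root. A small sketch for extracting such timings from this log format (the year is an assumption, since the short syslog-style prefix does not carry one; the two marker lines are copied from the entries above):

    from datetime import datetime

    def stamp(line: str, year: int = 2025) -> datetime:
        # Lines here begin e.g. "Jan 29 13:04:23.699721 systemd[1]: ..."
        return datetime.strptime(f"{year} " + " ".join(line.split()[:3]),
                                 "%Y %b %d %H:%M:%S.%f")

    stopped = stamp("Jan 29 13:04:23.699721 systemd-journald[184]: Journal stopped")
    resumed = stamp("Jan 29 13:04:25.417631 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).")
    print(f"gap across switch-root: {(resumed - stopped).total_seconds():.3f}s")  # ~1.718s
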
Jan 29 13:04:25.418068 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 13:04:25.418080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 13:04:25.418091 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 13:04:25.418103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 13:04:25.418116 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 13:04:25.418127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 13:04:25.418139 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 13:04:25.418154 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 13:04:25.418165 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 13:04:25.418177 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 13:04:25.418189 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 13:04:25.418201 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 13:04:25.418215 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 13:04:25.418227 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 13:04:25.418238 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 13:04:25.418250 systemd[1]: Reached target swap.target - Swaps.
Jan 29 13:04:25.418261 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 13:04:25.418274 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 13:04:25.418286 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 13:04:25.418298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 13:04:25.418309 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 13:04:25.418321 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 13:04:25.418335 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 13:04:25.418347 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 13:04:25.418359 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 13:04:25.418370 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:04:25.418382 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 13:04:25.418457 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 13:04:25.418471 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 13:04:25.418483 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 13:04:25.418498 systemd[1]: Reached target machines.target - Containers.
Jan 29 13:04:25.418510 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 13:04:25.418521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 13:04:25.418533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 13:04:25.418545 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 13:04:25.418557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 13:04:25.418569 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 13:04:25.418581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 13:04:25.418593 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 13:04:25.418607 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 13:04:25.418619 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 13:04:25.418631 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 13:04:25.418643 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 13:04:25.418655 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 13:04:25.418666 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 13:04:25.418678 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 13:04:25.418689 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 13:04:25.418701 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 13:04:25.418715 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 13:04:25.418730 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 13:04:25.418742 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 13:04:25.418753 kernel: loop: module loaded
Jan 29 13:04:25.418765 systemd[1]: Stopped verity-setup.service.
Jan 29 13:04:25.418777 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:04:25.418789 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 13:04:25.418800 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 13:04:25.418814 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 13:04:25.418826 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 13:04:25.418838 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 13:04:25.418849 kernel: fuse: init (API version 7.39)
Jan 29 13:04:25.418860 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 13:04:25.418872 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 13:04:25.418886 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 13:04:25.418898 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 13:04:25.418910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 13:04:25.418922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 13:04:25.418934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 13:04:25.418946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 13:04:25.418960 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 13:04:25.418971 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 13:04:25.418983 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 13:04:25.418995 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 13:04:25.419007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 13:04:25.419018 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 13:04:25.419046 systemd-journald[1090]: Collecting audit messages is disabled.
Jan 29 13:04:25.419071 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 13:04:25.419085 systemd-journald[1090]: Journal started
Jan 29 13:04:25.419111 systemd-journald[1090]: Runtime Journal (/run/log/journal/891bc8b40c92423b86fdd6514b956dfd) is 8.0M, max 78.3M, 70.3M free.
Jan 29 13:04:25.022740 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 13:04:25.046154 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 13:04:25.046826 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 13:04:25.425412 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 13:04:25.425444 kernel: ACPI: bus type drm_connector registered
Jan 29 13:04:25.425305 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 13:04:25.426099 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 13:04:25.426227 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 13:04:25.436216 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 13:04:25.443480 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 13:04:25.447467 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 13:04:25.448526 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 13:04:25.448567 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 13:04:25.451522 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 13:04:25.461553 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 13:04:25.465493 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 13:04:25.466195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 13:04:25.468658 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 13:04:25.473498 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 13:04:25.474138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 13:04:25.479583 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 13:04:25.480205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 13:04:25.482434 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 13:04:25.485810 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 13:04:25.487263 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 13:04:25.490176 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 13:04:25.490939 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 13:04:25.491755 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 13:04:25.492868 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 13:04:25.501876 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 13:04:25.506773 systemd-journald[1090]: Time spent on flushing to /var/log/journal/891bc8b40c92423b86fdd6514b956dfd is 41.979ms for 946 entries.
Jan 29 13:04:25.506773 systemd-journald[1090]: System Journal (/var/log/journal/891bc8b40c92423b86fdd6514b956dfd) is 8.0M, max 584.8M, 576.8M free.
Jan 29 13:04:25.579990 systemd-journald[1090]: Received client request to flush runtime journal.
Jan 29 13:04:25.580036 kernel: loop0: detected capacity change from 0 to 210664
Jan 29 13:04:25.511250 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 13:04:25.512013 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 13:04:25.518347 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 13:04:25.546362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 13:04:25.561466 udevadm[1139]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 13:04:25.582336 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 13:04:25.625715 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 13:04:25.626765 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 13:04:25.628693 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 13:04:25.631099 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 13:04:25.656614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 13:04:25.666436 kernel: loop1: detected capacity change from 0 to 8
Jan 29 13:04:25.685105 systemd-tmpfiles[1153]: ACLs are not supported, ignoring.
Jan 29 13:04:25.685538 systemd-tmpfiles[1153]: ACLs are not supported, ignoring.
Jan 29 13:04:25.692420 kernel: loop2: detected capacity change from 0 to 140768
Jan 29 13:04:25.694543 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 13:04:25.755436 kernel: loop3: detected capacity change from 0 to 142488
Jan 29 13:04:25.829940 kernel: loop4: detected capacity change from 0 to 210664
Jan 29 13:04:25.878660 kernel: loop5: detected capacity change from 0 to 8
Jan 29 13:04:25.882429 kernel: loop6: detected capacity change from 0 to 140768
Jan 29 13:04:25.922434 kernel: loop7: detected capacity change from 0 to 142488
Jan 29 13:04:25.974631 (sd-merge)[1159]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 29 13:04:25.975057 (sd-merge)[1159]: Merged extensions into '/usr'.
Jan 29 13:04:25.980841 systemd[1]: Reloading requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 13:04:25.980856 systemd[1]: Reloading...
Jan 29 13:04:26.059416 zram_generator::config[1181]: No configuration found.
Jan 29 13:04:26.290641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 13:04:26.365155 systemd[1]: Reloading finished in 383 ms.
Jan 29 13:04:26.394793 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 13:04:26.396896 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 13:04:26.397033 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 13:04:26.403505 systemd[1]: Starting ensure-sysext.service...
Jan 29 13:04:26.405557 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 13:04:26.409581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 13:04:26.411096 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 13:04:26.416457 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Jan 29 13:04:26.416468 systemd[1]: Reloading...
Jan 29 13:04:26.440176 systemd-udevd[1244]: Using default interface naming scheme 'v255'.
Jan 29 13:04:26.440716 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 13:04:26.441060 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 13:04:26.441912 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 13:04:26.442228 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Jan 29 13:04:26.442297 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Jan 29 13:04:26.445868 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 13:04:26.445882 systemd-tmpfiles[1243]: Skipping /boot
Jan 29 13:04:26.455945 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 13:04:26.455958 systemd-tmpfiles[1243]: Skipping /boot
Jan 29 13:04:26.490801 zram_generator::config[1268]: No configuration found.
Jan 29 13:04:26.624539 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1275)
Jan 29 13:04:26.668416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 13:04:26.675419 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 29 13:04:26.682418 kernel: ACPI: button: Power Button [PWRF]
Jan 29 13:04:26.698467 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 13:04:26.725507 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 13:04:26.740446 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 13:04:26.765580 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 29 13:04:26.765640 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 29 13:04:26.770455 kernel: Console: switching to colour dummy device 80x25
Jan 29 13:04:26.772257 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 13:04:26.772298 kernel: [drm] features: -context_init
Jan 29 13:04:26.774408 kernel: [drm] number of scanouts: 1
Jan 29 13:04:26.774460 kernel: [drm] number of cap sets: 0
Jan 29 13:04:26.776410 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 29 13:04:26.788697 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 29 13:04:26.788822 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 13:04:26.793552 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 13:04:26.808801 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 13:04:26.808896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 13:04:26.812058 systemd[1]: Reloading finished in 395 ms.
Jan 29 13:04:26.829658 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 13:04:26.835791 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 13:04:26.861999 systemd[1]: Finished ensure-sysext.service.
Jan 29 13:04:26.872411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:04:26.884688 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 13:04:26.892618 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 13:04:26.892985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 13:04:26.896449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 13:04:26.901639 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 13:04:26.905926 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 13:04:26.908200 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 13:04:26.909362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 13:04:26.918653 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 13:04:26.920755 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 13:04:26.930611 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 13:04:26.937645 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 13:04:26.947568 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 13:04:26.949523 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 13:04:26.954649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 13:04:26.957022 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:04:26.958689 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 13:04:26.959004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 13:04:26.959129 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 13:04:26.959618 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 13:04:26.959774 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 13:04:26.961269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 13:04:26.961385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 13:04:26.961662 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 13:04:26.961774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 13:04:26.974710 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 13:04:26.977559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 13:04:26.977651 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 13:04:26.983194 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 13:04:27.013225 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 13:04:27.023949 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 13:04:27.028561 lvm[1387]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 13:04:27.045069 augenrules[1400]: No rules
Jan 29 13:04:27.045796 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 13:04:27.056238 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 13:04:27.065642 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 13:04:27.070730 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 13:04:27.075659 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 13:04:27.078653 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 13:04:27.089660 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 13:04:27.090364 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 13:04:27.094603 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 13:04:27.100311 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 13:04:27.095355 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 13:04:27.134478 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 13:04:27.151741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 13:04:27.178712 systemd-resolved[1376]: Positive Trust Anchors:
Jan 29 13:04:27.178731 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 13:04:27.178773 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 13:04:27.183922 systemd-resolved[1376]: Using system hostname 'ci-4081-3-0-e-f5d4e76a77.novalocal'.
Jan 29 13:04:27.185699 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 13:04:27.192242 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 13:04:27.206608 systemd-networkd[1375]: lo: Link UP
Jan 29 13:04:27.206619 systemd-networkd[1375]: lo: Gained carrier
Jan 29 13:04:27.207953 systemd-networkd[1375]: Enumeration completed
Jan 29 13:04:27.208041 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 13:04:27.208823 systemd[1]: Reached target network.target - Network.
Jan 29 13:04:27.213641 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 13:04:27.213740 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 13:04:27.214801 systemd-networkd[1375]: eth0: Link UP
Jan 29 13:04:27.215078 systemd-networkd[1375]: eth0: Gained carrier
Jan 29 13:04:27.215145 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 13:04:27.219652 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 13:04:27.221959 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 13:04:27.222604 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 13:04:27.223188 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 13:04:27.227573 systemd-networkd[1375]: eth0: DHCPv4 address 172.24.4.245/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 29 13:04:27.228034 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 13:04:27.230866 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 13:04:27.231636 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection.
Jan 29 13:04:27.233049 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 13:04:27.233164 systemd[1]: Reached target paths.target - Path Units.
Jan 29 13:04:27.234699 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 13:04:27.236313 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 13:04:27.237802 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 13:04:27.239282 systemd[1]: Reached target timers.target - Timer Units.
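
The DHCPv4 lease logged above (172.24.4.245/24 via gateway 172.24.4.1) can be sanity-checked with the standard library alone; both values are copied verbatim from the systemd-networkd entry:

    import ipaddress

    iface = ipaddress.ip_interface("172.24.4.245/24")  # address from the lease
    gateway = ipaddress.ip_address("172.24.4.1")       # gateway from the lease

    assert gateway in iface.network                    # gateway is on-link
    print(iface.network)                               # 172.24.4.0/24
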
Jan 29 13:04:27.242516 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 13:04:27.245499 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 13:04:27.252664 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 13:04:27.256514 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 13:04:27.257606 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 13:04:27.258969 systemd[1]: Reached target basic.target - Basic System.
Jan 29 13:04:27.260597 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 13:04:27.260685 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 13:04:27.267915 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 13:04:27.273474 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 13:04:27.915240 systemd-timesyncd[1378]: Contacted time server 212.85.158.10:123 (0.flatcar.pool.ntp.org).
Jan 29 13:04:27.915304 systemd-timesyncd[1378]: Initial clock synchronization to Wed 2025-01-29 13:04:27.915113 UTC.
Jan 29 13:04:27.917570 systemd-resolved[1376]: Clock change detected. Flushing caches.
Jan 29 13:04:27.917911 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 13:04:27.930572 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 13:04:27.934642 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 13:04:27.937020 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 13:04:27.941778 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 13:04:27.942619 jq[1433]: false
Jan 29 13:04:27.954706 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 13:04:27.958879 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 13:04:27.963968 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 13:04:27.971942 dbus-daemon[1430]: [system] SELinux support is enabled
Jan 29 13:04:27.976043 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 13:04:27.981960 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 13:04:27.982504 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 13:04:27.984544 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 13:04:27.987255 extend-filesystems[1434]: Found loop4
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found loop5
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found loop6
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found loop7
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found vda
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found vda1
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found vda2
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found vda3
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found usr
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found vda4
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found vda6
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found vda7
Jan 29 13:04:27.989417 extend-filesystems[1434]: Found vda9
Jan 29 13:04:27.989417 extend-filesystems[1434]: Checking size of /dev/vda9
Jan 29 13:04:27.999066 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 13:04:28.000130 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 13:04:28.015766 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 13:04:28.015933 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 13:04:28.022526 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 13:04:28.022703 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 13:04:28.022788 extend-filesystems[1434]: Resized partition /dev/vda9
Jan 29 13:04:28.032684 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024)
Jan 29 13:04:28.048616 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 29 13:04:28.043888 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 13:04:28.044359 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 13:04:28.048866 jq[1447]: true
Jan 29 13:04:28.059710 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 29 13:04:28.140962 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1281)
Jan 29 13:04:28.141030 update_engine[1445]: I20250129 13:04:28.111495 1445 main.cc:92] Flatcar Update Engine starting
Jan 29 13:04:28.141030 update_engine[1445]: I20250129 13:04:28.112715 1445 update_check_scheduler.cc:74] Next update check in 3m50s
Jan 29 13:04:28.068181 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 13:04:28.161536 tar[1454]: linux-amd64/helm
Jan 29 13:04:28.161748 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 13:04:28.161748 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 13:04:28.161748 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 29 13:04:28.178543 jq[1462]: true
Jan 29 13:04:28.068209 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 13:04:28.178784 extend-filesystems[1434]: Resized filesystem in /dev/vda9
Jan 29 13:04:28.072336 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
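
The resize flow here (it completes with "Finished extend-filesystems.service" a few entries below) grows the root ext4 filesystem on /dev/vda9 online, from 1617920 to 2014203 4 KiB blocks. A hedged sketch of the equivalent manual step, assuming e2fsprogs is installed and the command is run as root:

    import subprocess

    # Online grow of a mounted ext4 filesystem, as extend-filesystems did;
    # the device name is taken from the log.
    subprocess.run(["resize2fs", "/dev/vda9"], check=True)

    # Sizes reported above, converted from 4 KiB blocks:
    print(f"{1617920 * 4096 / 2**30:.2f} GiB -> {2014203 * 4096 / 2**30:.2f} GiB")
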
Jan 29 13:04:28.072357 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 13:04:28.088739 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 13:04:28.114475 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 13:04:28.127756 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 13:04:28.161606 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 13:04:28.162122 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 13:04:28.166217 systemd-logind[1443]: New seat seat0.
Jan 29 13:04:28.175270 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 13:04:28.175290 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 13:04:28.177507 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 13:04:28.192777 bash[1486]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 13:04:28.200048 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 13:04:28.213769 systemd[1]: Starting sshkeys.service...
Jan 29 13:04:28.268711 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 13:04:28.281238 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 13:04:28.359295 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 13:04:28.370471 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 13:04:28.543857 containerd[1464]: time="2025-01-29T13:04:28.543741857Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 29 13:04:28.583847 containerd[1464]: time="2025-01-29T13:04:28.583794696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 13:04:28.585460 containerd[1464]: time="2025-01-29T13:04:28.585432368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 13:04:28.585872 containerd[1464]: time="2025-01-29T13:04:28.585521585Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 13:04:28.585872 containerd[1464]: time="2025-01-29T13:04:28.585543646Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 13:04:28.585872 containerd[1464]: time="2025-01-29T13:04:28.585704117Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 13:04:28.585872 containerd[1464]: time="2025-01-29T13:04:28.585723353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 13:04:28.585872 containerd[1464]: time="2025-01-29T13:04:28.585790089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 13:04:28.585872 containerd[1464]: time="2025-01-29T13:04:28.585807161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 13:04:28.586216 containerd[1464]: time="2025-01-29T13:04:28.586193605Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 13:04:28.586279 containerd[1464]: time="2025-01-29T13:04:28.586264709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 13:04:28.586340 containerd[1464]: time="2025-01-29T13:04:28.586324461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 13:04:28.586466 containerd[1464]: time="2025-01-29T13:04:28.586388942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 13:04:28.586725 containerd[1464]: time="2025-01-29T13:04:28.586589999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 13:04:28.586916 containerd[1464]: time="2025-01-29T13:04:28.586897476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 13:04:28.587426 containerd[1464]: time="2025-01-29T13:04:28.587114523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 13:04:28.587426 containerd[1464]: time="2025-01-29T13:04:28.587134500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 13:04:28.587426 containerd[1464]: time="2025-01-29T13:04:28.587228116Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 13:04:28.587426 containerd[1464]: time="2025-01-29T13:04:28.587279722Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 13:04:28.595773 containerd[1464]: time="2025-01-29T13:04:28.595752293Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 13:04:28.595867 containerd[1464]: time="2025-01-29T13:04:28.595852421Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 13:04:28.595955 containerd[1464]: time="2025-01-29T13:04:28.595941107Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 13:04:28.596437 containerd[1464]: time="2025-01-29T13:04:28.596014164Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 13:04:28.596437 containerd[1464]: time="2025-01-29T13:04:28.596035223Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 13:04:28.596582 containerd[1464]: time="2025-01-29T13:04:28.596564676Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 13:04:28.596979 containerd[1464]: time="2025-01-29T13:04:28.596960068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 13:04:28.597138 containerd[1464]: time="2025-01-29T13:04:28.597120519Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 13:04:28.597265 containerd[1464]: time="2025-01-29T13:04:28.597189208Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 13:04:28.597329 containerd[1464]: time="2025-01-29T13:04:28.597314643Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 13:04:28.597387 containerd[1464]: time="2025-01-29T13:04:28.597374345Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 13:04:28.597473 containerd[1464]: time="2025-01-29T13:04:28.597458022Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 13:04:28.597530 containerd[1464]: time="2025-01-29T13:04:28.597517884Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 13:04:28.597640 containerd[1464]: time="2025-01-29T13:04:28.597624654Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 13:04:28.597721 containerd[1464]: time="2025-01-29T13:04:28.597706197Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 13:04:28.597781 containerd[1464]: time="2025-01-29T13:04:28.597768073Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 13:04:28.597837 containerd[1464]: time="2025-01-29T13:04:28.597824890Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.597922112Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.597950385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.597965484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.597978358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.597993135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.598006370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.598020898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.598039973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.598061013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.598076181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598146 containerd[1464]: time="2025-01-29T13:04:28.598110656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598561 containerd[1464]: time="2025-01-29T13:04:28.598130002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598561 containerd[1464]: time="2025-01-29T13:04:28.598477845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598561 containerd[1464]: time="2025-01-29T13:04:28.598509544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598561 containerd[1464]: time="2025-01-29T13:04:28.598528530Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 13:04:28.598794 containerd[1464]: time="2025-01-29T13:04:28.598696114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598794 containerd[1464]: time="2025-01-29T13:04:28.598723335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.598794 containerd[1464]: time="2025-01-29T13:04:28.598736399Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 13:04:28.599910 containerd[1464]: time="2025-01-29T13:04:28.598954919Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 13:04:28.599910 containerd[1464]: time="2025-01-29T13:04:28.598983673Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 13:04:28.599910 containerd[1464]: time="2025-01-29T13:04:28.598997229Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 13:04:28.599910 containerd[1464]: time="2025-01-29T13:04:28.599011165Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 13:04:28.599910 containerd[1464]: time="2025-01-29T13:04:28.599022396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 13:04:28.599910 containerd[1464]: time="2025-01-29T13:04:28.599035991Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 13:04:28.599910 containerd[1464]: time="2025-01-29T13:04:28.599047232Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 13:04:28.599910 containerd[1464]: time="2025-01-29T13:04:28.599059365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Jan 29 13:04:28.600096 containerd[1464]: time="2025-01-29T13:04:28.599359678Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 13:04:28.600096 containerd[1464]: time="2025-01-29T13:04:28.599453454Z" level=info msg="Connect containerd service" Jan 29 13:04:28.600096 containerd[1464]: time="2025-01-29T13:04:28.599485234Z" level=info msg="using legacy CRI server" Jan 29 13:04:28.600096 containerd[1464]: time="2025-01-29T13:04:28.599492417Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 13:04:28.600096 containerd[1464]: time="2025-01-29T13:04:28.599599819Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 13:04:28.600674 containerd[1464]: time="2025-01-29T13:04:28.600649207Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 13:04:28.600901 
containerd[1464]: time="2025-01-29T13:04:28.600865162Z" level=info msg="Start subscribing containerd event" Jan 29 13:04:28.601082 containerd[1464]: time="2025-01-29T13:04:28.601067732Z" level=info msg="Start recovering state" Jan 29 13:04:28.601234 containerd[1464]: time="2025-01-29T13:04:28.601218675Z" level=info msg="Start event monitor" Jan 29 13:04:28.601471 containerd[1464]: time="2025-01-29T13:04:28.601454968Z" level=info msg="Start snapshots syncer" Jan 29 13:04:28.601537 containerd[1464]: time="2025-01-29T13:04:28.601524509Z" level=info msg="Start cni network conf syncer for default" Jan 29 13:04:28.601600 containerd[1464]: time="2025-01-29T13:04:28.601587316Z" level=info msg="Start streaming server" Jan 29 13:04:28.601867 containerd[1464]: time="2025-01-29T13:04:28.601655314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 13:04:28.601961 containerd[1464]: time="2025-01-29T13:04:28.601946901Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 13:04:28.602132 containerd[1464]: time="2025-01-29T13:04:28.602118072Z" level=info msg="containerd successfully booted in 0.059287s" Jan 29 13:04:28.602204 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 13:04:28.893359 tar[1454]: linux-amd64/LICENSE Jan 29 13:04:28.893869 tar[1454]: linux-amd64/README.md Jan 29 13:04:28.904253 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 13:04:28.925930 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 13:04:28.948448 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 13:04:28.954733 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 13:04:28.961771 systemd[1]: Started sshd@0-172.24.4.245:22-172.24.4.1:40930.service - OpenSSH per-connection server daemon (172.24.4.1:40930). Jan 29 13:04:28.964219 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 13:04:28.965504 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 13:04:28.971961 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 13:04:28.988596 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 13:04:29.002763 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 13:04:29.006842 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 13:04:29.007889 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 13:04:29.632644 systemd-networkd[1375]: eth0: Gained IPv6LL Jan 29 13:04:29.636770 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 13:04:29.643222 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 13:04:29.656007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:04:29.662531 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 13:04:29.723282 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 13:04:29.878100 sshd[1519]: Accepted publickey for core from 172.24.4.1 port 40930 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:04:29.884734 sshd[1519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:04:29.914899 systemd-logind[1443]: New session 1 of user core. Jan 29 13:04:29.918455 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
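The "failed to load cni during init" error above is expected at this point in boot: the CRI plugin comes up without pod networking and keeps retrying until a config file appears in /etc/cni/net.d (and, per the NetworkPluginMaxConfNum:1 setting in the config dump above, only the first file found is used). A CNI add-on normally writes that file later. Purely as an illustration, a minimal bridge configuration could look like this (filename, network name, and subnet are invented for the example, not taken from this node):

  # /etc/cni/net.d/10-example.conflist (hypothetical)
  {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }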
Jan 29 13:04:29.932011 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 13:04:29.957194 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 13:04:29.970804 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 13:04:29.979543 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 13:04:30.105342 systemd[1542]: Queued start job for default target default.target. Jan 29 13:04:30.113266 systemd[1542]: Created slice app.slice - User Application Slice. Jan 29 13:04:30.113296 systemd[1542]: Reached target paths.target - Paths. Jan 29 13:04:30.113312 systemd[1542]: Reached target timers.target - Timers. Jan 29 13:04:30.114840 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 13:04:30.136106 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 13:04:30.136213 systemd[1542]: Reached target sockets.target - Sockets. Jan 29 13:04:30.136228 systemd[1542]: Reached target basic.target - Basic System. Jan 29 13:04:30.136266 systemd[1542]: Reached target default.target - Main User Target. Jan 29 13:04:30.136292 systemd[1542]: Startup finished in 149ms. Jan 29 13:04:30.136371 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 13:04:30.141628 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 13:04:30.636741 systemd[1]: Started sshd@1-172.24.4.245:22-172.24.4.1:40936.service - OpenSSH per-connection server daemon (172.24.4.1:40936). Jan 29 13:04:31.395359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 13:04:31.399651 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 13:04:32.355451 sshd[1553]: Accepted publickey for core from 172.24.4.1 port 40936 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:04:32.358770 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:04:32.372816 systemd-logind[1443]: New session 2 of user core. Jan 29 13:04:32.381858 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 13:04:32.783739 kubelet[1561]: E0129 13:04:32.783636 1561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 13:04:32.788601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 13:04:32.788931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 13:04:32.789836 systemd[1]: kubelet.service: Consumed 2.071s CPU time. Jan 29 13:04:33.102691 sshd[1553]: pam_unix(sshd:session): session closed for user core Jan 29 13:04:33.114863 systemd[1]: sshd@1-172.24.4.245:22-172.24.4.1:40936.service: Deactivated successfully. Jan 29 13:04:33.118274 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 13:04:33.121761 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jan 29 13:04:33.130152 systemd[1]: Started sshd@2-172.24.4.245:22-172.24.4.1:40952.service - OpenSSH per-connection server daemon (172.24.4.1:40952). Jan 29 13:04:33.138868 systemd-logind[1443]: Removed session 2. 
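The kubelet exit above (status=1/FAILURE) is caused by the missing /var/lib/kubelet/config.yaml; on a node like this the file is normally written later by kubeadm during init/join, so the crash loop is transient. For illustration only, a minimal hand-written KubeletConfiguration consistent with this node's containerd settings might look like the following (the clusterDNS and clusterDomain values are assumptions, not taken from this log):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd          # matches SystemdCgroup:true in the CRI config dump above
  clusterDomain: cluster.local   # assumed default
  clusterDNS:                    # assumed cluster DNS address
    - 10.96.0.10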
Jan 29 13:04:34.049258 login[1527]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 13:04:34.058740 login[1526]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 13:04:34.060877 systemd-logind[1443]: New session 3 of user core. Jan 29 13:04:34.072136 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 13:04:34.078863 systemd-logind[1443]: New session 4 of user core. Jan 29 13:04:34.092820 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 13:04:34.266150 sshd[1575]: Accepted publickey for core from 172.24.4.1 port 40952 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:04:34.268846 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:04:34.278311 systemd-logind[1443]: New session 5 of user core. Jan 29 13:04:34.291923 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 13:04:34.914387 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 29 13:04:34.921343 systemd[1]: sshd@2-172.24.4.245:22-172.24.4.1:40952.service: Deactivated successfully. Jan 29 13:04:34.926666 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 13:04:34.929980 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jan 29 13:04:34.932387 systemd-logind[1443]: Removed session 5. Jan 29 13:04:34.973485 coreos-metadata[1429]: Jan 29 13:04:34.973 WARN failed to locate config-drive, using the metadata service API instead Jan 29 13:04:35.045852 coreos-metadata[1429]: Jan 29 13:04:35.045 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 13:04:35.385170 coreos-metadata[1491]: Jan 29 13:04:35.384 WARN failed to locate config-drive, using the metadata service API instead Jan 29 13:04:35.427459 coreos-metadata[1491]: Jan 29 13:04:35.427 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 13:04:35.470986 coreos-metadata[1429]: Jan 29 13:04:35.470 INFO Fetch successful Jan 29 13:04:35.470986 coreos-metadata[1429]: Jan 29 13:04:35.470 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 13:04:35.481137 coreos-metadata[1429]: Jan 29 13:04:35.480 INFO Fetch successful Jan 29 13:04:35.481345 coreos-metadata[1429]: Jan 29 13:04:35.481 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 13:04:35.494522 coreos-metadata[1429]: Jan 29 13:04:35.494 INFO Fetch successful Jan 29 13:04:35.494522 coreos-metadata[1429]: Jan 29 13:04:35.494 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 13:04:35.509513 coreos-metadata[1429]: Jan 29 13:04:35.509 INFO Fetch successful Jan 29 13:04:35.509740 coreos-metadata[1429]: Jan 29 13:04:35.509 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 13:04:35.523298 coreos-metadata[1429]: Jan 29 13:04:35.523 INFO Fetch successful Jan 29 13:04:35.523298 coreos-metadata[1429]: Jan 29 13:04:35.523 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 13:04:35.538673 coreos-metadata[1429]: Jan 29 13:04:35.538 INFO Fetch successful Jan 29 13:04:35.581190 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 13:04:35.583947 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
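As the WARN lines show, coreos-metadata could not locate a config drive and fell back to the OpenStack metadata API on the link-local address. The same endpoints it fetches can be queried by hand from inside the instance, e.g.:

  curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
  curl -s http://169.254.169.254/latest/meta-data/hostname
  curl -s http://169.254.169.254/latest/meta-data/local-ipv4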
Jan 29 13:04:35.830562 coreos-metadata[1491]: Jan 29 13:04:35.830 INFO Fetch successful Jan 29 13:04:35.830562 coreos-metadata[1491]: Jan 29 13:04:35.830 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 13:04:35.845295 coreos-metadata[1491]: Jan 29 13:04:35.845 INFO Fetch successful Jan 29 13:04:35.850910 unknown[1491]: wrote ssh authorized keys file for user: core Jan 29 13:04:35.885367 update-ssh-keys[1616]: Updated "/home/core/.ssh/authorized_keys" Jan 29 13:04:35.886471 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 13:04:35.889533 systemd[1]: Finished sshkeys.service. Jan 29 13:04:35.894677 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 13:04:35.894932 systemd[1]: Startup finished in 1.273s (kernel) + 14.591s (initrd) + 10.919s (userspace) = 26.784s. Jan 29 13:04:43.039735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 13:04:43.051763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:04:43.362691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 13:04:43.366488 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 13:04:43.433145 kubelet[1628]: E0129 13:04:43.433102 1628 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 13:04:43.437694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 13:04:43.438013 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 13:04:44.937950 systemd[1]: Started sshd@3-172.24.4.245:22-172.24.4.1:52956.service - OpenSSH per-connection server daemon (172.24.4.1:52956). Jan 29 13:04:46.116782 sshd[1637]: Accepted publickey for core from 172.24.4.1 port 52956 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:04:46.119476 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:04:46.130790 systemd-logind[1443]: New session 6 of user core. Jan 29 13:04:46.134724 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 13:04:46.906745 sshd[1637]: pam_unix(sshd:session): session closed for user core Jan 29 13:04:46.917735 systemd[1]: sshd@3-172.24.4.245:22-172.24.4.1:52956.service: Deactivated successfully. Jan 29 13:04:46.921005 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 13:04:46.924708 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jan 29 13:04:46.935930 systemd[1]: Started sshd@4-172.24.4.245:22-172.24.4.1:52972.service - OpenSSH per-connection server daemon (172.24.4.1:52972). Jan 29 13:04:46.938943 systemd-logind[1443]: Removed session 6. Jan 29 13:04:48.135118 sshd[1644]: Accepted publickey for core from 172.24.4.1 port 52972 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:04:48.137916 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:04:48.147618 systemd-logind[1443]: New session 7 of user core. Jan 29 13:04:48.157711 systemd[1]: Started session-7.scope - Session 7 of User core. 
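The "Scheduled restart job, restart counter is at 1" line means systemd, not an operator, is relaunching the failed kubelet; the roughly ten-second gap between the exit at 13:04:32 and this restart is consistent with a Restart=/RestartSec= pair in the unit. The node's actual unit file is not shown in this log, but a drop-in producing this behaviour would look like:

  # /etc/systemd/system/kubelet.service.d/10-restart.conf (hypothetical drop-in)
  [Service]
  Restart=always
  RestartSec=10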
Jan 29 13:04:48.884232 sshd[1644]: pam_unix(sshd:session): session closed for user core Jan 29 13:04:48.896997 systemd[1]: sshd@4-172.24.4.245:22-172.24.4.1:52972.service: Deactivated successfully. Jan 29 13:04:48.900688 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 13:04:48.902552 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jan 29 13:04:48.909984 systemd[1]: Started sshd@5-172.24.4.245:22-172.24.4.1:52976.service - OpenSSH per-connection server daemon (172.24.4.1:52976). Jan 29 13:04:48.914376 systemd-logind[1443]: Removed session 7. Jan 29 13:04:50.127089 sshd[1651]: Accepted publickey for core from 172.24.4.1 port 52976 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:04:50.129749 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:04:50.140879 systemd-logind[1443]: New session 8 of user core. Jan 29 13:04:50.153673 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 13:04:50.875115 sshd[1651]: pam_unix(sshd:session): session closed for user core Jan 29 13:04:50.886674 systemd[1]: sshd@5-172.24.4.245:22-172.24.4.1:52976.service: Deactivated successfully. Jan 29 13:04:50.890213 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 13:04:50.894708 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jan 29 13:04:50.899994 systemd[1]: Started sshd@6-172.24.4.245:22-172.24.4.1:52982.service - OpenSSH per-connection server daemon (172.24.4.1:52982). Jan 29 13:04:50.903179 systemd-logind[1443]: Removed session 8. Jan 29 13:04:52.124541 sshd[1658]: Accepted publickey for core from 172.24.4.1 port 52982 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:04:52.127159 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:04:52.137985 systemd-logind[1443]: New session 9 of user core. Jan 29 13:04:52.147695 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 13:04:52.613248 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 13:04:52.613559 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 13:04:52.625605 sudo[1661]: pam_unix(sudo:session): session closed for user root Jan 29 13:04:52.872655 sshd[1658]: pam_unix(sshd:session): session closed for user core Jan 29 13:04:52.883960 systemd[1]: sshd@6-172.24.4.245:22-172.24.4.1:52982.service: Deactivated successfully. Jan 29 13:04:52.886896 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 13:04:52.889562 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Jan 29 13:04:52.898838 systemd[1]: Started sshd@7-172.24.4.245:22-172.24.4.1:52990.service - OpenSSH per-connection server daemon (172.24.4.1:52990). Jan 29 13:04:52.901784 systemd-logind[1443]: Removed session 9. Jan 29 13:04:53.564763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 13:04:53.573861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:04:53.863029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 13:04:53.866612 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 13:04:54.007217 kubelet[1676]: E0129 13:04:54.007114 1676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 13:04:54.012114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 13:04:54.012274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 13:04:54.172909 sshd[1666]: Accepted publickey for core from 172.24.4.1 port 52990 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:04:54.175756 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:04:54.187070 systemd-logind[1443]: New session 10 of user core. Jan 29 13:04:54.193742 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 13:04:54.662454 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 13:04:54.663120 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 13:04:54.671025 sudo[1686]: pam_unix(sudo:session): session closed for user root Jan 29 13:04:54.683080 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 13:04:54.683787 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 13:04:54.709531 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 13:04:54.726728 auditctl[1689]: No rules Jan 29 13:04:54.729295 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 13:04:54.729805 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 13:04:54.738505 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 13:04:54.805637 augenrules[1707]: No rules Jan 29 13:04:54.807651 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 13:04:54.810363 sudo[1685]: pam_unix(sudo:session): session closed for user root Jan 29 13:04:55.070611 sshd[1666]: pam_unix(sshd:session): session closed for user core Jan 29 13:04:55.088141 systemd[1]: sshd@7-172.24.4.245:22-172.24.4.1:52990.service: Deactivated successfully. Jan 29 13:04:55.091703 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 13:04:55.096068 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jan 29 13:04:55.102956 systemd[1]: Started sshd@8-172.24.4.245:22-172.24.4.1:55452.service - OpenSSH per-connection server daemon (172.24.4.1:55452). Jan 29 13:04:55.105891 systemd-logind[1443]: Removed session 10. Jan 29 13:04:56.274626 sshd[1715]: Accepted publickey for core from 172.24.4.1 port 55452 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:04:56.277580 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:04:56.287388 systemd-logind[1443]: New session 11 of user core. Jan 29 13:04:56.299831 systemd[1]: Started session-11.scope - Session 11 of User core. 
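Taken together, the two sudo entries in session 10 above amount to this shell sequence (commands verbatim from the log, shown here for readability):

  sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
  sudo systemctl restart audit-rules
  # stopping the service runs auditctl, which now reports "No rules";
  # restarting it runs augenrules against the emptied /etc/audit/rules.d/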
Jan 29 13:04:56.720727 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 13:04:56.722100 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 13:04:57.411657 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 13:04:57.413186 (dockerd)[1735]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 13:04:58.019373 dockerd[1735]: time="2025-01-29T13:04:58.019121093Z" level=info msg="Starting up" Jan 29 13:04:58.202044 systemd[1]: var-lib-docker-metacopy\x2dcheck311837917-merged.mount: Deactivated successfully. Jan 29 13:04:58.247774 dockerd[1735]: time="2025-01-29T13:04:58.247137859Z" level=info msg="Loading containers: start." Jan 29 13:04:58.417470 kernel: Initializing XFRM netlink socket Jan 29 13:04:58.509855 systemd-networkd[1375]: docker0: Link UP Jan 29 13:04:58.526477 dockerd[1735]: time="2025-01-29T13:04:58.526434510Z" level=info msg="Loading containers: done." Jan 29 13:04:58.542431 dockerd[1735]: time="2025-01-29T13:04:58.542246184Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 13:04:58.542431 dockerd[1735]: time="2025-01-29T13:04:58.542373253Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 13:04:58.542578 dockerd[1735]: time="2025-01-29T13:04:58.542489431Z" level=info msg="Daemon has completed initialization" Jan 29 13:04:58.581762 dockerd[1735]: time="2025-01-29T13:04:58.581694299Z" level=info msg="API listen on /run/docker.sock" Jan 29 13:04:58.582168 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 13:05:00.208606 containerd[1464]: time="2025-01-29T13:05:00.208517606Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 13:05:00.927954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447535602.mount: Deactivated successfully. 
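Docker finished initialization above on the overlay2 storage driver (with the native-diff performance warning noted). Assuming the standard docker CLI is present on the node, the reported driver and version can be confirmed with:

  docker info --format '{{.Driver}} {{.ServerVersion}}'
  # expected output on this node: overlay2 26.1.0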
Jan 29 13:05:02.792800 containerd[1464]: time="2025-01-29T13:05:02.792753451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:02.794126 containerd[1464]: time="2025-01-29T13:05:02.794072591Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 29 13:05:02.795305 containerd[1464]: time="2025-01-29T13:05:02.795282986Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:02.798570 containerd[1464]: time="2025-01-29T13:05:02.798539024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:02.799890 containerd[1464]: time="2025-01-29T13:05:02.799859087Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.59128299s" Jan 29 13:05:02.799938 containerd[1464]: time="2025-01-29T13:05:02.799893040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 13:05:02.823199 containerd[1464]: time="2025-01-29T13:05:02.823152895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 13:05:04.063902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 13:05:04.071948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:05:04.285608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 13:05:04.289298 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 13:05:04.478481 kubelet[1948]: E0129 13:05:04.478242 1948 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 13:05:04.482708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 13:05:04.483257 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
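These PullImage/ImageCreate events come from containerd's CRI plugin, which stores images under the k8s.io namespace; they can be listed with the ctr tool that ships with containerd:

  ctr --namespace k8s.io images ls | grep kube-apiserver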
Jan 29 13:05:05.282380 containerd[1464]: time="2025-01-29T13:05:05.282323415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:05.283982 containerd[1464]: time="2025-01-29T13:05:05.283580386Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 29 13:05:05.287411 containerd[1464]: time="2025-01-29T13:05:05.286491686Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:05.290699 containerd[1464]: time="2025-01-29T13:05:05.290662200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:05.291907 containerd[1464]: time="2025-01-29T13:05:05.291880538Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.468681998s" Jan 29 13:05:05.291981 containerd[1464]: time="2025-01-29T13:05:05.291964707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 13:05:05.318252 containerd[1464]: time="2025-01-29T13:05:05.318218706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 13:05:07.248377 containerd[1464]: time="2025-01-29T13:05:07.248238281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:07.249836 containerd[1464]: time="2025-01-29T13:05:07.249584468Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 29 13:05:07.251083 containerd[1464]: time="2025-01-29T13:05:07.251026084Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:07.254353 containerd[1464]: time="2025-01-29T13:05:07.254274375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:07.256103 containerd[1464]: time="2025-01-29T13:05:07.255427850Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.936965735s" Jan 29 13:05:07.256103 containerd[1464]: time="2025-01-29T13:05:07.255469277Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 13:05:07.279064 
containerd[1464]: time="2025-01-29T13:05:07.279026141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 13:05:08.625465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696115048.mount: Deactivated successfully. Jan 29 13:05:09.107443 containerd[1464]: time="2025-01-29T13:05:09.107170908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:09.108858 containerd[1464]: time="2025-01-29T13:05:09.108627232Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 13:05:09.110163 containerd[1464]: time="2025-01-29T13:05:09.110083253Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:09.112608 containerd[1464]: time="2025-01-29T13:05:09.112555300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:09.113516 containerd[1464]: time="2025-01-29T13:05:09.113305933Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.83423602s" Jan 29 13:05:09.113516 containerd[1464]: time="2025-01-29T13:05:09.113356969Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 13:05:09.135777 containerd[1464]: time="2025-01-29T13:05:09.135710761Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 13:05:09.779943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968302573.mount: Deactivated successfully. 
Jan 29 13:05:10.830518 containerd[1464]: time="2025-01-29T13:05:10.830424983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:10.831822 containerd[1464]: time="2025-01-29T13:05:10.831748365Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 13:05:10.833291 containerd[1464]: time="2025-01-29T13:05:10.833238070Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:10.837570 containerd[1464]: time="2025-01-29T13:05:10.837493362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:10.838226 containerd[1464]: time="2025-01-29T13:05:10.838190655Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.702439647s" Jan 29 13:05:10.838274 containerd[1464]: time="2025-01-29T13:05:10.838227144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 13:05:10.860947 containerd[1464]: time="2025-01-29T13:05:10.860916957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 13:05:11.789960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584181833.mount: Deactivated successfully. 
Jan 29 13:05:11.801634 containerd[1464]: time="2025-01-29T13:05:11.801544208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:11.803581 containerd[1464]: time="2025-01-29T13:05:11.803483748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 29 13:05:11.804710 containerd[1464]: time="2025-01-29T13:05:11.804606162Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:11.811553 containerd[1464]: time="2025-01-29T13:05:11.811481745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:11.813924 containerd[1464]: time="2025-01-29T13:05:11.813706363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 952.745333ms" Jan 29 13:05:11.813924 containerd[1464]: time="2025-01-29T13:05:11.813775302Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 13:05:11.860226 containerd[1464]: time="2025-01-29T13:05:11.860142437Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 13:05:12.495867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685975820.mount: Deactivated successfully. Jan 29 13:05:13.124599 update_engine[1445]: I20250129 13:05:13.124547 1445 update_attempter.cc:509] Updating boot flags... Jan 29 13:05:13.164679 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2077) Jan 29 13:05:14.564443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 13:05:14.571566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:05:14.664537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 13:05:14.668952 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 13:05:14.873699 kubelet[2107]: E0129 13:05:14.873322 2107 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 13:05:14.876295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 13:05:14.876453 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 13:05:15.536540 containerd[1464]: time="2025-01-29T13:05:15.536388696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:15.539048 containerd[1464]: time="2025-01-29T13:05:15.538903566Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 29 13:05:15.546351 containerd[1464]: time="2025-01-29T13:05:15.546275652Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:15.562326 containerd[1464]: time="2025-01-29T13:05:15.562249606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:15.564963 containerd[1464]: time="2025-01-29T13:05:15.564781298Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.704442943s" Jan 29 13:05:15.565428 containerd[1464]: time="2025-01-29T13:05:15.565247875Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 13:05:20.477219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 13:05:20.484740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:05:20.511127 systemd[1]: Reloading requested from client PID 2183 ('systemctl') (unit session-11.scope)... Jan 29 13:05:20.511143 systemd[1]: Reloading... Jan 29 13:05:20.581421 zram_generator::config[2219]: No configuration found. Jan 29 13:05:20.730942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 13:05:20.812283 systemd[1]: Reloading finished in 300 ms. Jan 29 13:05:20.863468 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 13:05:20.863554 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 13:05:20.863805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 13:05:20.872732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:05:21.831726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 13:05:21.831727 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 13:05:21.888686 kubelet[2287]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 13:05:21.888686 kubelet[2287]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 13:05:21.888686 kubelet[2287]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 13:05:21.889360 kubelet[2287]: I0129 13:05:21.888694 2287 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 13:05:22.592831 kubelet[2287]: I0129 13:05:22.592776 2287 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 13:05:22.592831 kubelet[2287]: I0129 13:05:22.592803 2287 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 13:05:22.593098 kubelet[2287]: I0129 13:05:22.593004 2287 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 13:05:22.619475 kubelet[2287]: I0129 13:05:22.618902 2287 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 13:05:22.619654 kubelet[2287]: E0129 13:05:22.619571 2287 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.245:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:22.641655 kubelet[2287]: I0129 13:05:22.641614 2287 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 13:05:22.642388 kubelet[2287]: I0129 13:05:22.642325 2287 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 13:05:22.643463 kubelet[2287]: I0129 13:05:22.642580 2287 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-f5d4e76a77.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 13:05:22.643463 kubelet[2287]: I0129 13:05:22.643027 2287 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 29 13:05:22.643463 kubelet[2287]: I0129 13:05:22.643055 2287 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 13:05:22.643463 kubelet[2287]: I0129 13:05:22.643270 2287 state_mem.go:36] "Initialized new in-memory state store" Jan 29 13:05:22.645830 kubelet[2287]: I0129 13:05:22.645797 2287 kubelet.go:400] "Attempting to sync node with API server" Jan 29 13:05:22.646178 kubelet[2287]: I0129 13:05:22.645985 2287 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 13:05:22.646178 kubelet[2287]: I0129 13:05:22.646045 2287 kubelet.go:312] "Adding apiserver pod source" Jan 29 13:05:22.646178 kubelet[2287]: I0129 13:05:22.646075 2287 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 13:05:22.655543 kubelet[2287]: W0129 13:05:22.655116 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-f5d4e76a77.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:22.655543 kubelet[2287]: E0129 13:05:22.655242 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-f5d4e76a77.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:22.659143 kubelet[2287]: I0129 13:05:22.658741 2287 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 13:05:22.659455 kubelet[2287]: W0129 13:05:22.659323 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.245:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:22.659455 kubelet[2287]: E0129 13:05:22.659405 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.245:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:22.663147 kubelet[2287]: I0129 13:05:22.662103 2287 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 13:05:22.663147 kubelet[2287]: W0129 13:05:22.662211 2287 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 13:05:22.664164 kubelet[2287]: I0129 13:05:22.664056 2287 server.go:1264] "Started kubelet" Jan 29 13:05:22.667948 kubelet[2287]: I0129 13:05:22.667880 2287 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 13:05:22.677566 kubelet[2287]: E0129 13:05:22.677201 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.245:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.245:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-e-f5d4e76a77.novalocal.181f2b9a3ea0f3f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-f5d4e76a77.novalocal,UID:ci-4081-3-0-e-f5d4e76a77.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-f5d4e76a77.novalocal,},FirstTimestamp:2025-01-29 13:05:22.663994359 +0000 UTC m=+0.828642700,LastTimestamp:2025-01-29 13:05:22.663994359 +0000 UTC m=+0.828642700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-f5d4e76a77.novalocal,}" Jan 29 13:05:22.679538 kubelet[2287]: I0129 13:05:22.678915 2287 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 13:05:22.680228 kubelet[2287]: I0129 13:05:22.680182 2287 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 13:05:22.681224 kubelet[2287]: I0129 13:05:22.681168 2287 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 13:05:22.681353 kubelet[2287]: I0129 13:05:22.681291 2287 reconciler.go:26] "Reconciler: start to sync state" Jan 29 13:05:22.681966 kubelet[2287]: I0129 13:05:22.681931 2287 server.go:455] "Adding debug handlers to kubelet server" Jan 29 13:05:22.684659 kubelet[2287]: I0129 13:05:22.684542 2287 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 13:05:22.685539 kubelet[2287]: I0129 13:05:22.685466 2287 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 13:05:22.689473 kubelet[2287]: W0129 13:05:22.689329 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:22.689880 kubelet[2287]: E0129 13:05:22.689811 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:22.690464 kubelet[2287]: E0129 13:05:22.690286 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-f5d4e76a77.novalocal?timeout=10s\": dial tcp 172.24.4.245:6443: connect: connection refused" interval="200ms" Jan 29 13:05:22.690978 kubelet[2287]: I0129 13:05:22.690942 2287 factory.go:221] Registration of the systemd container factory successfully Jan 29 13:05:22.691297 kubelet[2287]: I0129 13:05:22.691258 2287 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Jan 29 13:05:22.697180 kubelet[2287]: I0129 13:05:22.697140 2287 factory.go:221] Registration of the containerd container factory successfully Jan 29 13:05:22.715978 kubelet[2287]: I0129 13:05:22.715913 2287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 13:05:22.717729 kubelet[2287]: I0129 13:05:22.717658 2287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 13:05:22.717729 kubelet[2287]: I0129 13:05:22.717706 2287 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 13:05:22.717729 kubelet[2287]: I0129 13:05:22.717730 2287 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 13:05:22.717850 kubelet[2287]: E0129 13:05:22.717771 2287 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 13:05:22.726347 kubelet[2287]: W0129 13:05:22.726304 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:22.726515 kubelet[2287]: E0129 13:05:22.726503 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:22.728704 kubelet[2287]: I0129 13:05:22.728661 2287 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 13:05:22.728704 kubelet[2287]: I0129 13:05:22.728674 2287 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 13:05:22.728924 kubelet[2287]: I0129 13:05:22.728820 2287 state_mem.go:36] "Initialized new in-memory state store" Jan 29 13:05:22.735000 kubelet[2287]: I0129 13:05:22.734939 2287 policy_none.go:49] "None policy: Start" Jan 29 13:05:22.735714 kubelet[2287]: I0129 13:05:22.735649 2287 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 13:05:22.735714 kubelet[2287]: I0129 13:05:22.735668 2287 state_mem.go:35] "Initializing new in-memory state store" Jan 29 13:05:22.743999 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 13:05:22.755163 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 13:05:22.758515 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
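The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created here are the QoS parents of the kubelet's systemd cgroup hierarchy; the per-pod leaves that appear just below follow a fixed naming scheme. A rough sketch of that mapping (simplified; the real logic lives in the kubelet's container manager):

package main

import (
	"fmt"
	"strings"
)

// podSliceName approximates the naming of the per-pod leaves under the
// QoS slices created above: '-' is systemd's slice hierarchy separator,
// so literal dashes in the pod UID are escaped to underscores.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// Matches the burstable slice created for the scheduler pod below.
	fmt.Println(podSliceName("burstable", "96274634fb2731a55f6d64ace7da59a4"))
}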
Jan 29 13:05:22.770000 kubelet[2287]: I0129 13:05:22.769958 2287 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 13:05:22.770381 kubelet[2287]: I0129 13:05:22.770116 2287 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 13:05:22.770381 kubelet[2287]: I0129 13:05:22.770222 2287 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 13:05:22.772328 kubelet[2287]: E0129 13:05:22.772307 2287 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:22.781970 kubelet[2287]: I0129 13:05:22.781917 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.782446 kubelet[2287]: E0129 13:05:22.782419 2287 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.245:6443/api/v1/nodes\": dial tcp 172.24.4.245:6443: connect: connection refused" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.819173 kubelet[2287]: I0129 13:05:22.818759 2287 topology_manager.go:215] "Topology Admit Handler" podUID="96274634fb2731a55f6d64ace7da59a4" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.821969 kubelet[2287]: I0129 13:05:22.821888 2287 topology_manager.go:215] "Topology Admit Handler" podUID="548375779413cb88b589b3ab554e436b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.826436 kubelet[2287]: I0129 13:05:22.826281 2287 topology_manager.go:215] "Topology Admit Handler" podUID="b3567059fc6fb6ae058aa42e89432e9e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.840039 systemd[1]: Created slice kubepods-burstable-pod96274634fb2731a55f6d64ace7da59a4.slice - libcontainer container kubepods-burstable-pod96274634fb2731a55f6d64ace7da59a4.slice. Jan 29 13:05:22.862069 systemd[1]: Created slice kubepods-burstable-pod548375779413cb88b589b3ab554e436b.slice - libcontainer container kubepods-burstable-pod548375779413cb88b589b3ab554e436b.slice. 
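The 32-hex-digit pod UIDs admitted above (96274634…, 548375…, b3567059…) are generated by the kubelet itself: static pods from /etc/kubernetes/manifests never originate in the API server, so the kubelet derives a deterministic UID by hashing the manifest. A sketch of the idea; the exact input to the hash is an assumption, not something the log shows:

package main

import (
	"crypto/md5"
	"fmt"
)

// staticPodUID sketches the deterministic-UID idea: hash the manifest
// together with the node name so the same file always yields the same
// pod identity on a given node. The real kubelet hashes a decoded pod
// object rather than raw bytes, so treat this as illustrative only.
func staticPodUID(manifest []byte, nodeName string) string {
	return fmt.Sprintf("%x", md5.Sum(append(manifest, nodeName...)))
}

func main() {
	fmt.Println(staticPodUID(
		[]byte("apiVersion: v1\nkind: Pod\n..."),
		"ci-4081-3-0-e-f5d4e76a77.novalocal",
	)) // 32 hex characters, the same shape as the UIDs above
}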
Jan 29 13:05:22.881747 kubelet[2287]: I0129 13:05:22.881667 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.882933 kubelet[2287]: I0129 13:05:22.882015 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.882933 kubelet[2287]: I0129 13:05:22.882086 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.882861 systemd[1]: Created slice kubepods-burstable-podb3567059fc6fb6ae058aa42e89432e9e.slice - libcontainer container kubepods-burstable-podb3567059fc6fb6ae058aa42e89432e9e.slice. Jan 29 13:05:22.891869 kubelet[2287]: E0129 13:05:22.891717 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-f5d4e76a77.novalocal?timeout=10s\": dial tcp 172.24.4.245:6443: connect: connection refused" interval="400ms" Jan 29 13:05:22.983318 kubelet[2287]: I0129 13:05:22.983168 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.983318 kubelet[2287]: I0129 13:05:22.983256 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96274634fb2731a55f6d64ace7da59a4-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"96274634fb2731a55f6d64ace7da59a4\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.983318 kubelet[2287]: I0129 13:05:22.983306 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/548375779413cb88b589b3ab554e436b-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"548375779413cb88b589b3ab554e436b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.983637 kubelet[2287]: I0129 13:05:22.983432 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-kubeconfig\") pod 
\"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.983637 kubelet[2287]: I0129 13:05:22.983486 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/548375779413cb88b589b3ab554e436b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"548375779413cb88b589b3ab554e436b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.983637 kubelet[2287]: I0129 13:05:22.983533 2287 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/548375779413cb88b589b3ab554e436b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"548375779413cb88b589b3ab554e436b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.987290 kubelet[2287]: I0129 13:05:22.987233 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:22.988159 kubelet[2287]: E0129 13:05:22.988058 2287 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.245:6443/api/v1/nodes\": dial tcp 172.24.4.245:6443: connect: connection refused" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:23.158113 containerd[1464]: time="2025-01-29T13:05:23.157769585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-f5d4e76a77.novalocal,Uid:96274634fb2731a55f6d64ace7da59a4,Namespace:kube-system,Attempt:0,}" Jan 29 13:05:23.183103 containerd[1464]: time="2025-01-29T13:05:23.182988264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal,Uid:548375779413cb88b589b3ab554e436b,Namespace:kube-system,Attempt:0,}" Jan 29 13:05:23.190588 containerd[1464]: time="2025-01-29T13:05:23.190444822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal,Uid:b3567059fc6fb6ae058aa42e89432e9e,Namespace:kube-system,Attempt:0,}" Jan 29 13:05:23.293612 kubelet[2287]: E0129 13:05:23.293381 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-f5d4e76a77.novalocal?timeout=10s\": dial tcp 172.24.4.245:6443: connect: connection refused" interval="800ms" Jan 29 13:05:23.391245 kubelet[2287]: I0129 13:05:23.390960 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:23.391824 kubelet[2287]: E0129 13:05:23.391680 2287 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.245:6443/api/v1/nodes\": dial tcp 172.24.4.245:6443: connect: connection refused" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:23.626614 kubelet[2287]: W0129 13:05:23.626478 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.245:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:23.626614 kubelet[2287]: E0129 13:05:23.626581 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get "https://172.24.4.245:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:23.759064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263446752.mount: Deactivated successfully. Jan 29 13:05:23.767772 containerd[1464]: time="2025-01-29T13:05:23.767669822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 13:05:23.772538 containerd[1464]: time="2025-01-29T13:05:23.772362398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 13:05:23.774118 containerd[1464]: time="2025-01-29T13:05:23.774013199Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 13:05:23.776905 containerd[1464]: time="2025-01-29T13:05:23.776833538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 13:05:23.778146 containerd[1464]: time="2025-01-29T13:05:23.778075110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 13:05:23.779772 containerd[1464]: time="2025-01-29T13:05:23.779716625Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 13:05:23.783439 containerd[1464]: time="2025-01-29T13:05:23.781582561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 13:05:23.789194 containerd[1464]: time="2025-01-29T13:05:23.789134809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 13:05:23.796459 containerd[1464]: time="2025-01-29T13:05:23.796332441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.188826ms" Jan 29 13:05:23.800053 containerd[1464]: time="2025-01-29T13:05:23.799982428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.397944ms" Jan 29 13:05:23.801233 containerd[1464]: time="2025-01-29T13:05:23.801187472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.266754ms" Jan 29 13:05:23.873252 kubelet[2287]: 
W0129 13:05:23.873212 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:23.874311 kubelet[2287]: E0129 13:05:23.874281 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:23.908852 kubelet[2287]: W0129 13:05:23.907614 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:23.908852 kubelet[2287]: E0129 13:05:23.907835 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:24.028470 containerd[1464]: time="2025-01-29T13:05:24.027893272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:05:24.028470 containerd[1464]: time="2025-01-29T13:05:24.027955709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:05:24.028470 containerd[1464]: time="2025-01-29T13:05:24.027974845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:24.028470 containerd[1464]: time="2025-01-29T13:05:24.028058401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:24.028968 containerd[1464]: time="2025-01-29T13:05:24.028339950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:05:24.029262 containerd[1464]: time="2025-01-29T13:05:24.028976116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:05:24.029262 containerd[1464]: time="2025-01-29T13:05:24.029030708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:24.030182 containerd[1464]: time="2025-01-29T13:05:24.030023022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:24.030182 containerd[1464]: time="2025-01-29T13:05:24.029544984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:05:24.030182 containerd[1464]: time="2025-01-29T13:05:24.029596671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:05:24.030182 containerd[1464]: time="2025-01-29T13:05:24.029615897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:24.030761 containerd[1464]: time="2025-01-29T13:05:24.030524184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:24.060661 systemd[1]: Started cri-containerd-83b39167233f8cfbb3e3084851cc24abdc5f339eb88862ed7403f2d08673b22f.scope - libcontainer container 83b39167233f8cfbb3e3084851cc24abdc5f339eb88862ed7403f2d08673b22f. Jan 29 13:05:24.068136 systemd[1]: Started cri-containerd-7828ff78097f6089d495d73d09c25f7188cb60882ffc0a3a563204641eb54b33.scope - libcontainer container 7828ff78097f6089d495d73d09c25f7188cb60882ffc0a3a563204641eb54b33. Jan 29 13:05:24.070519 systemd[1]: Started cri-containerd-87f3abbed43b521bfd6987695c14a6b4e3fe6ce9442210f4653014291537583c.scope - libcontainer container 87f3abbed43b521bfd6987695c14a6b4e3fe6ce9442210f4653014291537583c. Jan 29 13:05:24.094958 kubelet[2287]: E0129 13:05:24.094886 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-f5d4e76a77.novalocal?timeout=10s\": dial tcp 172.24.4.245:6443: connect: connection refused" interval="1.6s" Jan 29 13:05:24.125543 containerd[1464]: time="2025-01-29T13:05:24.125502011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal,Uid:548375779413cb88b589b3ab554e436b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7828ff78097f6089d495d73d09c25f7188cb60882ffc0a3a563204641eb54b33\"" Jan 29 13:05:24.129321 containerd[1464]: time="2025-01-29T13:05:24.129284506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal,Uid:b3567059fc6fb6ae058aa42e89432e9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"83b39167233f8cfbb3e3084851cc24abdc5f339eb88862ed7403f2d08673b22f\"" Jan 29 13:05:24.131024 kubelet[2287]: W0129 13:05:24.130976 2287 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-f5d4e76a77.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:24.131163 kubelet[2287]: E0129 13:05:24.131143 2287 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-f5d4e76a77.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.245:6443: connect: connection refused Jan 29 13:05:24.135182 containerd[1464]: time="2025-01-29T13:05:24.135136520Z" level=info msg="CreateContainer within sandbox \"83b39167233f8cfbb3e3084851cc24abdc5f339eb88862ed7403f2d08673b22f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 13:05:24.136426 containerd[1464]: time="2025-01-29T13:05:24.135700268Z" level=info msg="CreateContainer within sandbox \"7828ff78097f6089d495d73d09c25f7188cb60882ffc0a3a563204641eb54b33\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 13:05:24.147567 containerd[1464]: time="2025-01-29T13:05:24.147513029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-f5d4e76a77.novalocal,Uid:96274634fb2731a55f6d64ace7da59a4,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"87f3abbed43b521bfd6987695c14a6b4e3fe6ce9442210f4653014291537583c\"" Jan 29 13:05:24.150278 containerd[1464]: time="2025-01-29T13:05:24.150236575Z" level=info msg="CreateContainer within sandbox \"87f3abbed43b521bfd6987695c14a6b4e3fe6ce9442210f4653014291537583c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 13:05:24.173120 containerd[1464]: time="2025-01-29T13:05:24.173023180Z" level=info msg="CreateContainer within sandbox \"83b39167233f8cfbb3e3084851cc24abdc5f339eb88862ed7403f2d08673b22f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"851dff2cd2409238aa3b8cc1651a8c6f15e8ad15bbee786e6a20ded6506aae11\"" Jan 29 13:05:24.173953 containerd[1464]: time="2025-01-29T13:05:24.173647934Z" level=info msg="StartContainer for \"851dff2cd2409238aa3b8cc1651a8c6f15e8ad15bbee786e6a20ded6506aae11\"" Jan 29 13:05:24.179438 containerd[1464]: time="2025-01-29T13:05:24.179379911Z" level=info msg="CreateContainer within sandbox \"7828ff78097f6089d495d73d09c25f7188cb60882ffc0a3a563204641eb54b33\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1017e8674e9fab7a2105d84053ec8da95db43ac1010c63d450508724df9f74c0\"" Jan 29 13:05:24.183571 containerd[1464]: time="2025-01-29T13:05:24.183544695Z" level=info msg="StartContainer for \"1017e8674e9fab7a2105d84053ec8da95db43ac1010c63d450508724df9f74c0\"" Jan 29 13:05:24.193198 containerd[1464]: time="2025-01-29T13:05:24.193143537Z" level=info msg="CreateContainer within sandbox \"87f3abbed43b521bfd6987695c14a6b4e3fe6ce9442210f4653014291537583c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7de6481b3f3635b50f04ff98079350d11191a74e2b59cc5b1f8b2bb77141bd06\"" Jan 29 13:05:24.194831 containerd[1464]: time="2025-01-29T13:05:24.194593961Z" level=info msg="StartContainer for \"7de6481b3f3635b50f04ff98079350d11191a74e2b59cc5b1f8b2bb77141bd06\"" Jan 29 13:05:24.196988 kubelet[2287]: I0129 13:05:24.196923 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:24.197336 kubelet[2287]: E0129 13:05:24.197295 2287 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.245:6443/api/v1/nodes\": dial tcp 172.24.4.245:6443: connect: connection refused" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:24.205610 systemd[1]: Started cri-containerd-851dff2cd2409238aa3b8cc1651a8c6f15e8ad15bbee786e6a20ded6506aae11.scope - libcontainer container 851dff2cd2409238aa3b8cc1651a8c6f15e8ad15bbee786e6a20ded6506aae11. Jan 29 13:05:24.221579 systemd[1]: Started cri-containerd-1017e8674e9fab7a2105d84053ec8da95db43ac1010c63d450508724df9f74c0.scope - libcontainer container 1017e8674e9fab7a2105d84053ec8da95db43ac1010c63d450508724df9f74c0. Jan 29 13:05:24.242536 systemd[1]: Started cri-containerd-7de6481b3f3635b50f04ff98079350d11191a74e2b59cc5b1f8b2bb77141bd06.scope - libcontainer container 7de6481b3f3635b50f04ff98079350d11191a74e2b59cc5b1f8b2bb77141bd06. 
Jan 29 13:05:24.289330 containerd[1464]: time="2025-01-29T13:05:24.288773860Z" level=info msg="StartContainer for \"851dff2cd2409238aa3b8cc1651a8c6f15e8ad15bbee786e6a20ded6506aae11\" returns successfully" Jan 29 13:05:24.297148 containerd[1464]: time="2025-01-29T13:05:24.297110270Z" level=info msg="StartContainer for \"1017e8674e9fab7a2105d84053ec8da95db43ac1010c63d450508724df9f74c0\" returns successfully" Jan 29 13:05:24.334014 containerd[1464]: time="2025-01-29T13:05:24.333967035Z" level=info msg="StartContainer for \"7de6481b3f3635b50f04ff98079350d11191a74e2b59cc5b1f8b2bb77141bd06\" returns successfully" Jan 29 13:05:25.799709 kubelet[2287]: I0129 13:05:25.799664 2287 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:25.912917 kubelet[2287]: E0129 13:05:25.912865 2287 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:26.039498 kubelet[2287]: E0129 13:05:26.039336 2287 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-e-f5d4e76a77.novalocal.181f2b9a3ea0f3f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-f5d4e76a77.novalocal,UID:ci-4081-3-0-e-f5d4e76a77.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-f5d4e76a77.novalocal,},FirstTimestamp:2025-01-29 13:05:22.663994359 +0000 UTC m=+0.828642700,LastTimestamp:2025-01-29 13:05:22.663994359 +0000 UTC m=+0.828642700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-f5d4e76a77.novalocal,}" Jan 29 13:05:26.098959 kubelet[2287]: E0129 13:05:26.098730 2287 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-e-f5d4e76a77.novalocal.181f2b9a42722160 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-f5d4e76a77.novalocal,UID:ci-4081-3-0-e-f5d4e76a77.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-3-0-e-f5d4e76a77.novalocal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-f5d4e76a77.novalocal,},FirstTimestamp:2025-01-29 13:05:22.728034656 +0000 UTC m=+0.892682947,LastTimestamp:2025-01-29 13:05:22.728034656 +0000 UTC m=+0.892682947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-f5d4e76a77.novalocal,}" Jan 29 13:05:26.110583 kubelet[2287]: I0129 13:05:26.110429 2287 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:26.125793 kubelet[2287]: E0129 13:05:26.125735 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:26.225996 kubelet[2287]: E0129 13:05:26.225941 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:26.326990 kubelet[2287]: E0129 13:05:26.326956 2287 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:26.427996 kubelet[2287]: E0129 13:05:26.427341 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:26.528342 kubelet[2287]: E0129 13:05:26.528295 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:26.628652 kubelet[2287]: E0129 13:05:26.628605 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:26.729567 kubelet[2287]: E0129 13:05:26.729449 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:26.830667 kubelet[2287]: E0129 13:05:26.830617 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:26.931186 kubelet[2287]: E0129 13:05:26.931113 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:27.032559 kubelet[2287]: E0129 13:05:27.032273 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:27.132788 kubelet[2287]: E0129 13:05:27.132704 2287 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-f5d4e76a77.novalocal\" not found" Jan 29 13:05:27.658333 kubelet[2287]: I0129 13:05:27.658219 2287 apiserver.go:52] "Watching apiserver" Jan 29 13:05:27.681882 kubelet[2287]: I0129 13:05:27.681831 2287 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 13:05:28.677278 systemd[1]: Reloading requested from client PID 2557 ('systemctl') (unit session-11.scope)... Jan 29 13:05:28.677576 systemd[1]: Reloading... Jan 29 13:05:28.776455 zram_generator::config[2592]: No configuration found. Jan 29 13:05:28.926960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 13:05:29.025247 systemd[1]: Reloading finished in 346 ms. Jan 29 13:05:29.064152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:05:29.075779 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 13:05:29.075949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 13:05:29.075995 systemd[1]: kubelet.service: Consumed 1.357s CPU time, 113.3M memory peak, 0B memory swap peak. Jan 29 13:05:29.080720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:05:29.269166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 13:05:29.277871 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 13:05:29.329106 kubelet[2660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
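The restarted kubelet (PID 2660) immediately warns, above and just below, about flags that now belong in the config file. The pattern behind such warnings is that the flag still parses but is registered as deprecated; the kubelet uses spf13/pflag, whose FlagSet.MarkDeprecated does this directly. A stdlib-only sketch of the same behavior:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	endpoint := flag.String("container-runtime-endpoint", "",
		"deprecated: set via the kubelet config file instead")
	flag.Parse()
	if *endpoint != "" {
		// Honor the flag but nag, mirroring the kubelet messages above.
		fmt.Fprintln(os.Stderr,
			"Flag --container-runtime-endpoint has been deprecated,",
			"set it via the config file instead")
	}
}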
Jan 29 13:05:29.329106 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 13:05:29.329106 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 13:05:29.329483 kubelet[2660]: I0129 13:05:29.329144 2660 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 13:05:29.334371 kubelet[2660]: I0129 13:05:29.333264 2660 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 13:05:29.334371 kubelet[2660]: I0129 13:05:29.333288 2660 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 13:05:29.334371 kubelet[2660]: I0129 13:05:29.333518 2660 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 13:05:29.442064 kubelet[2660]: I0129 13:05:29.335146 2660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 13:05:29.442064 kubelet[2660]: I0129 13:05:29.336368 2660 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 13:05:29.442064 kubelet[2660]: I0129 13:05:29.347330 2660 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 13:05:29.442064 kubelet[2660]: I0129 13:05:29.347571 2660 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 13:05:29.442308 kubelet[2660]: I0129 13:05:29.347599 2660 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-f5d4e76a77.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 13:05:29.442308 kubelet[2660]: I0129 13:05:29.347774 2660 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 29 13:05:29.442308 kubelet[2660]: I0129 13:05:29.347785 2660 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 13:05:29.442308 kubelet[2660]: I0129 13:05:29.347820 2660 state_mem.go:36] "Initialized new in-memory state store" Jan 29 13:05:29.442308 kubelet[2660]: I0129 13:05:29.347916 2660 kubelet.go:400] "Attempting to sync node with API server" Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.347935 2660 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.348772 2660 kubelet.go:312] "Adding apiserver pod source" Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.348808 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.351428 2660 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.351590 2660 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.351958 2660 server.go:1264] "Started kubelet" Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.352870 2660 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.353693 2660 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.353757 2660 server.go:455] "Adding debug handlers to kubelet server" Jan 29 13:05:29.442771 kubelet[2660]: E0129 13:05:29.384004 2660 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 13:05:29.442771 kubelet[2660]: I0129 13:05:29.439559 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 13:05:29.448297 kubelet[2660]: I0129 13:05:29.444323 2660 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 13:05:29.448297 kubelet[2660]: I0129 13:05:29.444658 2660 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 13:05:29.448297 kubelet[2660]: I0129 13:05:29.445468 2660 reconciler.go:26] "Reconciler: start to sync state" Jan 29 13:05:29.448297 kubelet[2660]: I0129 13:05:29.448141 2660 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 13:05:29.451621 kubelet[2660]: I0129 13:05:29.451566 2660 factory.go:221] Registration of the systemd container factory successfully Jan 29 13:05:29.452521 kubelet[2660]: I0129 13:05:29.452244 2660 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 13:05:29.464829 kubelet[2660]: I0129 13:05:29.464765 2660 factory.go:221] Registration of the containerd container factory successfully Jan 29 13:05:29.484031 kubelet[2660]: I0129 13:05:29.483952 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 13:05:29.486850 kubelet[2660]: I0129 13:05:29.486277 2660 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 13:05:29.486850 kubelet[2660]: I0129 13:05:29.486342 2660 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 13:05:29.486850 kubelet[2660]: I0129 13:05:29.486375 2660 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 13:05:29.486850 kubelet[2660]: E0129 13:05:29.486519 2660 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 13:05:29.548550 kubelet[2660]: I0129 13:05:29.547241 2660 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.555820 kubelet[2660]: I0129 13:05:29.555648 2660 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 13:05:29.556413 kubelet[2660]: I0129 13:05:29.556255 2660 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 13:05:29.556413 kubelet[2660]: I0129 13:05:29.556283 2660 state_mem.go:36] "Initialized new in-memory state store" Jan 29 13:05:29.556684 kubelet[2660]: I0129 13:05:29.556625 2660 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 13:05:29.556892 kubelet[2660]: I0129 13:05:29.556750 2660 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 13:05:29.556892 kubelet[2660]: I0129 13:05:29.556788 2660 policy_none.go:49] "None policy: Start" Jan 29 13:05:29.558274 kubelet[2660]: I0129 13:05:29.558124 2660 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.558274 kubelet[2660]: I0129 13:05:29.558181 2660 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.558553 kubelet[2660]: I0129 13:05:29.558540 2660 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 13:05:29.559509 kubelet[2660]: I0129 13:05:29.558645 2660 state_mem.go:35] "Initializing new in-memory state store" Jan 29 13:05:29.559509 kubelet[2660]: I0129 13:05:29.558777 2660 state_mem.go:75] "Updated machine memory state" Jan 29 13:05:29.565742 kubelet[2660]: I0129 13:05:29.565721 2660 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 13:05:29.566319 kubelet[2660]: I0129 13:05:29.566292 2660 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 13:05:29.566516 kubelet[2660]: I0129 13:05:29.566505 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 13:05:29.586637 kubelet[2660]: I0129 13:05:29.586607 2660 topology_manager.go:215] "Topology Admit Handler" podUID="548375779413cb88b589b3ab554e436b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.586859 kubelet[2660]: I0129 13:05:29.586842 2660 topology_manager.go:215] "Topology Admit Handler" podUID="b3567059fc6fb6ae058aa42e89432e9e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.587040 kubelet[2660]: I0129 13:05:29.586992 2660 topology_manager.go:215] "Topology Admit Handler" podUID="96274634fb2731a55f6d64ace7da59a4" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.598305 kubelet[2660]: W0129 13:05:29.598282 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 13:05:29.600639 kubelet[2660]: 
W0129 13:05:29.600607 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 13:05:29.600907 kubelet[2660]: W0129 13:05:29.600811 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 13:05:29.646241 kubelet[2660]: I0129 13:05:29.646207 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/548375779413cb88b589b3ab554e436b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"548375779413cb88b589b3ab554e436b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.646525 kubelet[2660]: I0129 13:05:29.646494 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/548375779413cb88b589b3ab554e436b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"548375779413cb88b589b3ab554e436b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.646591 kubelet[2660]: I0129 13:05:29.646537 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.646591 kubelet[2660]: I0129 13:05:29.646564 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.646591 kubelet[2660]: I0129 13:05:29.646584 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96274634fb2731a55f6d64ace7da59a4-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"96274634fb2731a55f6d64ace7da59a4\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.646690 kubelet[2660]: I0129 13:05:29.646603 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/548375779413cb88b589b3ab554e436b-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"548375779413cb88b589b3ab554e436b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.646690 kubelet[2660]: I0129 13:05:29.646624 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.646690 
kubelet[2660]: I0129 13:05:29.646648 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:29.646690 kubelet[2660]: I0129 13:05:29.646667 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3567059fc6fb6ae058aa42e89432e9e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" (UID: \"b3567059fc6fb6ae058aa42e89432e9e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:30.349516 kubelet[2660]: I0129 13:05:30.349459 2660 apiserver.go:52] "Watching apiserver" Jan 29 13:05:30.445857 kubelet[2660]: I0129 13:05:30.445799 2660 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 13:05:30.548455 kubelet[2660]: W0129 13:05:30.545490 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 13:05:30.548455 kubelet[2660]: E0129 13:05:30.545560 2660 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:30.548455 kubelet[2660]: W0129 13:05:30.546272 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 13:05:30.548455 kubelet[2660]: E0129 13:05:30.546311 2660 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:05:30.571527 kubelet[2660]: I0129 13:05:30.571466 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-e-f5d4e76a77.novalocal" podStartSLOduration=1.5709227559999999 podStartE2EDuration="1.570922756s" podCreationTimestamp="2025-01-29 13:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 13:05:30.560666279 +0000 UTC m=+1.277655856" watchObservedRunningTime="2025-01-29 13:05:30.570922756 +0000 UTC m=+1.287912333" Jan 29 13:05:30.571714 kubelet[2660]: I0129 13:05:30.571587 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-e-f5d4e76a77.novalocal" podStartSLOduration=1.571581122 podStartE2EDuration="1.571581122s" podCreationTimestamp="2025-01-29 13:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 13:05:30.570655545 +0000 UTC m=+1.287645122" watchObservedRunningTime="2025-01-29 13:05:30.571581122 +0000 UTC m=+1.288570690" Jan 29 13:05:30.579496 kubelet[2660]: I0129 13:05:30.579445 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f5d4e76a77.novalocal" 
podStartSLOduration=1.5794324739999999 podStartE2EDuration="1.579432474s" podCreationTimestamp="2025-01-29 13:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 13:05:30.579067248 +0000 UTC m=+1.296056825" watchObservedRunningTime="2025-01-29 13:05:30.579432474 +0000 UTC m=+1.296422051" Jan 29 13:05:35.209913 sudo[1718]: pam_unix(sudo:session): session closed for user root Jan 29 13:05:35.417966 sshd[1715]: pam_unix(sshd:session): session closed for user core Jan 29 13:05:35.424074 systemd[1]: sshd@8-172.24.4.245:22-172.24.4.1:55452.service: Deactivated successfully. Jan 29 13:05:35.426762 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 13:05:35.427077 systemd[1]: session-11.scope: Consumed 7.900s CPU time, 192.2M memory peak, 0B memory swap peak. Jan 29 13:05:35.428136 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Jan 29 13:05:35.430054 systemd-logind[1443]: Removed session 11. Jan 29 13:05:43.008230 kubelet[2660]: I0129 13:05:43.008023 2660 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 13:05:43.009007 kubelet[2660]: I0129 13:05:43.008822 2660 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 13:05:43.009040 containerd[1464]: time="2025-01-29T13:05:43.008631037Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 13:05:43.852990 kubelet[2660]: I0129 13:05:43.852885 2660 topology_manager.go:215] "Topology Admit Handler" podUID="43ac9a24-ccf2-4287-ac75-b2b769598ba7" podNamespace="kube-system" podName="kube-proxy-hkq7s" Jan 29 13:05:43.881347 systemd[1]: Created slice kubepods-besteffort-pod43ac9a24_ccf2_4287_ac75_b2b769598ba7.slice - libcontainer container kubepods-besteffort-pod43ac9a24_ccf2_4287_ac75_b2b769598ba7.slice. 
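The runtime-config update at 13:05:43 hands the node's pod CIDR (192.168.0.0/24) to the container runtime, which is what the "No cni config template" message is reacting to; every pod IP allocated on this node must fall inside that prefix. A quick demonstration:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The prefix pushed to the runtime in the CIDR update above.
	podCIDR := netip.MustParsePrefix("192.168.0.0/24")
	for _, ip := range []string{"192.168.0.5", "192.168.1.5"} {
		fmt.Println(ip, "in pod CIDR:", podCIDR.Contains(netip.MustParseAddr(ip)))
	}
	// 192.168.0.5 in pod CIDR: true
	// 192.168.1.5 in pod CIDR: false
}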
Jan 29 13:05:43.942169 kubelet[2660]: I0129 13:05:43.942039 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43ac9a24-ccf2-4287-ac75-b2b769598ba7-kube-proxy\") pod \"kube-proxy-hkq7s\" (UID: \"43ac9a24-ccf2-4287-ac75-b2b769598ba7\") " pod="kube-system/kube-proxy-hkq7s" Jan 29 13:05:43.942169 kubelet[2660]: I0129 13:05:43.942077 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43ac9a24-ccf2-4287-ac75-b2b769598ba7-xtables-lock\") pod \"kube-proxy-hkq7s\" (UID: \"43ac9a24-ccf2-4287-ac75-b2b769598ba7\") " pod="kube-system/kube-proxy-hkq7s" Jan 29 13:05:43.942169 kubelet[2660]: I0129 13:05:43.942103 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43ac9a24-ccf2-4287-ac75-b2b769598ba7-lib-modules\") pod \"kube-proxy-hkq7s\" (UID: \"43ac9a24-ccf2-4287-ac75-b2b769598ba7\") " pod="kube-system/kube-proxy-hkq7s" Jan 29 13:05:43.942169 kubelet[2660]: I0129 13:05:43.942122 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwp9q\" (UniqueName: \"kubernetes.io/projected/43ac9a24-ccf2-4287-ac75-b2b769598ba7-kube-api-access-xwp9q\") pod \"kube-proxy-hkq7s\" (UID: \"43ac9a24-ccf2-4287-ac75-b2b769598ba7\") " pod="kube-system/kube-proxy-hkq7s" Jan 29 13:05:44.095922 kubelet[2660]: I0129 13:05:44.095841 2660 topology_manager.go:215] "Topology Admit Handler" podUID="59b2d493-9d5e-43f8-be48-b9eb7dcd6ad5" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-89dmd" Jan 29 13:05:44.115336 systemd[1]: Created slice kubepods-besteffort-pod59b2d493_9d5e_43f8_be48_b9eb7dcd6ad5.slice - libcontainer container kubepods-besteffort-pod59b2d493_9d5e_43f8_be48_b9eb7dcd6ad5.slice. Jan 29 13:05:44.143845 kubelet[2660]: I0129 13:05:44.143741 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/59b2d493-9d5e-43f8-be48-b9eb7dcd6ad5-var-lib-calico\") pod \"tigera-operator-7bc55997bb-89dmd\" (UID: \"59b2d493-9d5e-43f8-be48-b9eb7dcd6ad5\") " pod="tigera-operator/tigera-operator-7bc55997bb-89dmd" Jan 29 13:05:44.143845 kubelet[2660]: I0129 13:05:44.143797 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4th9\" (UniqueName: \"kubernetes.io/projected/59b2d493-9d5e-43f8-be48-b9eb7dcd6ad5-kube-api-access-d4th9\") pod \"tigera-operator-7bc55997bb-89dmd\" (UID: \"59b2d493-9d5e-43f8-be48-b9eb7dcd6ad5\") " pod="tigera-operator/tigera-operator-7bc55997bb-89dmd" Jan 29 13:05:44.193012 containerd[1464]: time="2025-01-29T13:05:44.192963158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hkq7s,Uid:43ac9a24-ccf2-4287-ac75-b2b769598ba7,Namespace:kube-system,Attempt:0,}" Jan 29 13:05:44.238683 containerd[1464]: time="2025-01-29T13:05:44.238543322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:05:44.238683 containerd[1464]: time="2025-01-29T13:05:44.238668377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:05:44.239056 containerd[1464]: time="2025-01-29T13:05:44.238710697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:44.239056 containerd[1464]: time="2025-01-29T13:05:44.238950617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:44.286650 systemd[1]: Started cri-containerd-4a50826c52f9e61bc3baceb2ffe34fd36bb0675786f9d1c8972e4f3594a75bf7.scope - libcontainer container 4a50826c52f9e61bc3baceb2ffe34fd36bb0675786f9d1c8972e4f3594a75bf7. Jan 29 13:05:44.308925 containerd[1464]: time="2025-01-29T13:05:44.308804055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hkq7s,Uid:43ac9a24-ccf2-4287-ac75-b2b769598ba7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a50826c52f9e61bc3baceb2ffe34fd36bb0675786f9d1c8972e4f3594a75bf7\"" Jan 29 13:05:44.313425 containerd[1464]: time="2025-01-29T13:05:44.313366424Z" level=info msg="CreateContainer within sandbox \"4a50826c52f9e61bc3baceb2ffe34fd36bb0675786f9d1c8972e4f3594a75bf7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 13:05:44.336621 containerd[1464]: time="2025-01-29T13:05:44.336588497Z" level=info msg="CreateContainer within sandbox \"4a50826c52f9e61bc3baceb2ffe34fd36bb0675786f9d1c8972e4f3594a75bf7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fc3919f11ab2f30055e7236b9f274c0a00c84554b09453abcff867abcf88a75d\"" Jan 29 13:05:44.337418 containerd[1464]: time="2025-01-29T13:05:44.337366737Z" level=info msg="StartContainer for \"fc3919f11ab2f30055e7236b9f274c0a00c84554b09453abcff867abcf88a75d\"" Jan 29 13:05:44.364539 systemd[1]: Started cri-containerd-fc3919f11ab2f30055e7236b9f274c0a00c84554b09453abcff867abcf88a75d.scope - libcontainer container fc3919f11ab2f30055e7236b9f274c0a00c84554b09453abcff867abcf88a75d. Jan 29 13:05:44.397557 containerd[1464]: time="2025-01-29T13:05:44.397432029Z" level=info msg="StartContainer for \"fc3919f11ab2f30055e7236b9f274c0a00c84554b09453abcff867abcf88a75d\" returns successfully" Jan 29 13:05:44.420085 containerd[1464]: time="2025-01-29T13:05:44.419703007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-89dmd,Uid:59b2d493-9d5e-43f8-be48-b9eb7dcd6ad5,Namespace:tigera-operator,Attempt:0,}" Jan 29 13:05:44.460175 containerd[1464]: time="2025-01-29T13:05:44.458331487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:05:44.460175 containerd[1464]: time="2025-01-29T13:05:44.458610771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:05:44.460175 containerd[1464]: time="2025-01-29T13:05:44.458759580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:44.460175 containerd[1464]: time="2025-01-29T13:05:44.459000883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:44.483140 systemd[1]: Started cri-containerd-2f77df7985c2e7f65e3acca3828596b597dedcc935a5d5305f033d353737497f.scope - libcontainer container 2f77df7985c2e7f65e3acca3828596b597dedcc935a5d5305f033d353737497f. 
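The kube-api-access-xwp9q and kube-api-access-d4th9 volumes registered above are the projected service-account token volumes; their five-character tails are random suffixes. A sketch of the generation, assuming the consonant-only alphabet used by apimachinery's rand package (chosen so a suffix never spells a word):

package main

import (
	"fmt"
	"math/rand"
)

// alphanums is the assumed suffix alphabet: no vowels, no lookalike
// characters such as 0/O or 1/l.
const alphanums = "bcdfghjklmnpqrstvwxz2456789"

func suffix(n int) string {
	b := make([]byte, n)
	for i := range b {
		b[i] = alphanums[rand.Intn(len(alphanums))]
	}
	return string(b)
}

func main() {
	fmt.Println("kube-api-access-" + suffix(5)) // e.g. kube-api-access-xwp9q
}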
Jan 29 13:05:44.524957 containerd[1464]: time="2025-01-29T13:05:44.524915701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-89dmd,Uid:59b2d493-9d5e-43f8-be48-b9eb7dcd6ad5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2f77df7985c2e7f65e3acca3828596b597dedcc935a5d5305f033d353737497f\"" Jan 29 13:05:44.526839 containerd[1464]: time="2025-01-29T13:05:44.526810787Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 13:05:44.595169 kubelet[2660]: I0129 13:05:44.593199 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hkq7s" podStartSLOduration=1.593163186 podStartE2EDuration="1.593163186s" podCreationTimestamp="2025-01-29 13:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 13:05:44.592639663 +0000 UTC m=+15.309629230" watchObservedRunningTime="2025-01-29 13:05:44.593163186 +0000 UTC m=+15.310152803" Jan 29 13:05:46.456279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796132760.mount: Deactivated successfully. Jan 29 13:05:47.181209 containerd[1464]: time="2025-01-29T13:05:47.181154659Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:47.182503 containerd[1464]: time="2025-01-29T13:05:47.182465689Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 13:05:47.183440 containerd[1464]: time="2025-01-29T13:05:47.183405182Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:47.186094 containerd[1464]: time="2025-01-29T13:05:47.186044143Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:47.186778 containerd[1464]: time="2025-01-29T13:05:47.186746541Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.659899746s" Jan 29 13:05:47.186826 containerd[1464]: time="2025-01-29T13:05:47.186777900Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 13:05:47.189775 containerd[1464]: time="2025-01-29T13:05:47.189448641Z" level=info msg="CreateContainer within sandbox \"2f77df7985c2e7f65e3acca3828596b597dedcc935a5d5305f033d353737497f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 13:05:47.215350 containerd[1464]: time="2025-01-29T13:05:47.215318547Z" level=info msg="CreateContainer within sandbox \"2f77df7985c2e7f65e3acca3828596b597dedcc935a5d5305f033d353737497f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4d432f311b26804958742759fbddf2fa1cd943f5acf3fbe88982fa6a1263512f\"" Jan 29 13:05:47.217550 containerd[1464]: time="2025-01-29T13:05:47.216496757Z" level=info msg="StartContainer for \"4d432f311b26804958742759fbddf2fa1cd943f5acf3fbe88982fa6a1263512f\"" 
Jan 29 13:05:47.244548 systemd[1]: Started cri-containerd-4d432f311b26804958742759fbddf2fa1cd943f5acf3fbe88982fa6a1263512f.scope - libcontainer container 4d432f311b26804958742759fbddf2fa1cd943f5acf3fbe88982fa6a1263512f. Jan 29 13:05:47.272109 containerd[1464]: time="2025-01-29T13:05:47.272010652Z" level=info msg="StartContainer for \"4d432f311b26804958742759fbddf2fa1cd943f5acf3fbe88982fa6a1263512f\" returns successfully" Jan 29 13:05:47.596431 kubelet[2660]: I0129 13:05:47.596231 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-89dmd" podStartSLOduration=0.934591898 podStartE2EDuration="3.596212925s" podCreationTimestamp="2025-01-29 13:05:44 +0000 UTC" firstStartedPulling="2025-01-29 13:05:44.526281193 +0000 UTC m=+15.243270770" lastFinishedPulling="2025-01-29 13:05:47.187902219 +0000 UTC m=+17.904891797" observedRunningTime="2025-01-29 13:05:47.594853634 +0000 UTC m=+18.311843221" watchObservedRunningTime="2025-01-29 13:05:47.596212925 +0000 UTC m=+18.313202502" Jan 29 13:05:50.387141 kubelet[2660]: I0129 13:05:50.386174 2660 topology_manager.go:215] "Topology Admit Handler" podUID="e5cff4ed-85ba-4e1a-a236-fd4540825b9e" podNamespace="calico-system" podName="calico-typha-5f6d58bb8b-xg8pd" Jan 29 13:05:50.394244 systemd[1]: Created slice kubepods-besteffort-pode5cff4ed_85ba_4e1a_a236_fd4540825b9e.slice - libcontainer container kubepods-besteffort-pode5cff4ed_85ba_4e1a_a236_fd4540825b9e.slice. Jan 29 13:05:50.487154 kubelet[2660]: I0129 13:05:50.486950 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e5cff4ed-85ba-4e1a-a236-fd4540825b9e-typha-certs\") pod \"calico-typha-5f6d58bb8b-xg8pd\" (UID: \"e5cff4ed-85ba-4e1a-a236-fd4540825b9e\") " pod="calico-system/calico-typha-5f6d58bb8b-xg8pd" Jan 29 13:05:50.487154 kubelet[2660]: I0129 13:05:50.487059 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8wcq\" (UniqueName: \"kubernetes.io/projected/e5cff4ed-85ba-4e1a-a236-fd4540825b9e-kube-api-access-p8wcq\") pod \"calico-typha-5f6d58bb8b-xg8pd\" (UID: \"e5cff4ed-85ba-4e1a-a236-fd4540825b9e\") " pod="calico-system/calico-typha-5f6d58bb8b-xg8pd" Jan 29 13:05:50.487154 kubelet[2660]: I0129 13:05:50.487116 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5cff4ed-85ba-4e1a-a236-fd4540825b9e-tigera-ca-bundle\") pod \"calico-typha-5f6d58bb8b-xg8pd\" (UID: \"e5cff4ed-85ba-4e1a-a236-fd4540825b9e\") " pod="calico-system/calico-typha-5f6d58bb8b-xg8pd" Jan 29 13:05:50.488696 kubelet[2660]: I0129 13:05:50.488160 2660 topology_manager.go:215] "Topology Admit Handler" podUID="c19664e9-9c2e-4c2f-8d2a-080c1e34ac52" podNamespace="calico-system" podName="calico-node-npxgs" Jan 29 13:05:50.498382 systemd[1]: Created slice kubepods-besteffort-podc19664e9_9c2e_4c2f_8d2a_080c1e34ac52.slice - libcontainer container kubepods-besteffort-podc19664e9_9c2e_4c2f_8d2a_080c1e34ac52.slice. 
Jan 29 13:05:50.589082 kubelet[2660]: I0129 13:05:50.588443 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-cni-log-dir\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589082 kubelet[2660]: I0129 13:05:50.588561 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-var-run-calico\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589082 kubelet[2660]: I0129 13:05:50.588587 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-var-lib-calico\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589082 kubelet[2660]: I0129 13:05:50.588621 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-flexvol-driver-host\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589082 kubelet[2660]: I0129 13:05:50.588645 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-tigera-ca-bundle\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589366 kubelet[2660]: I0129 13:05:50.588674 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-xtables-lock\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589366 kubelet[2660]: I0129 13:05:50.588691 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-cni-net-dir\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589366 kubelet[2660]: I0129 13:05:50.588710 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxbr\" (UniqueName: \"kubernetes.io/projected/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-kube-api-access-dqxbr\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589366 kubelet[2660]: I0129 13:05:50.588727 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-node-certs\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589366 kubelet[2660]: I0129 13:05:50.588744 2660 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-cni-bin-dir\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589683 kubelet[2660]: I0129 13:05:50.588762 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-lib-modules\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.589683 kubelet[2660]: I0129 13:05:50.588779 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c19664e9-9c2e-4c2f-8d2a-080c1e34ac52-policysync\") pod \"calico-node-npxgs\" (UID: \"c19664e9-9c2e-4c2f-8d2a-080c1e34ac52\") " pod="calico-system/calico-node-npxgs" Jan 29 13:05:50.625485 kubelet[2660]: I0129 13:05:50.625437 2660 topology_manager.go:215] "Topology Admit Handler" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" podNamespace="calico-system" podName="csi-node-driver-crzf7" Jan 29 13:05:50.626046 kubelet[2660]: E0129 13:05:50.625792 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:05:50.691339 kubelet[2660]: I0129 13:05:50.689886 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c60872ff-6905-49ac-9a5c-64272dbc73e4-registration-dir\") pod \"csi-node-driver-crzf7\" (UID: \"c60872ff-6905-49ac-9a5c-64272dbc73e4\") " pod="calico-system/csi-node-driver-crzf7" Jan 29 13:05:50.692479 kubelet[2660]: I0129 13:05:50.691668 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c60872ff-6905-49ac-9a5c-64272dbc73e4-varrun\") pod \"csi-node-driver-crzf7\" (UID: \"c60872ff-6905-49ac-9a5c-64272dbc73e4\") " pod="calico-system/csi-node-driver-crzf7" Jan 29 13:05:50.692479 kubelet[2660]: I0129 13:05:50.691720 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c60872ff-6905-49ac-9a5c-64272dbc73e4-kubelet-dir\") pod \"csi-node-driver-crzf7\" (UID: \"c60872ff-6905-49ac-9a5c-64272dbc73e4\") " pod="calico-system/csi-node-driver-crzf7" Jan 29 13:05:50.692479 kubelet[2660]: I0129 13:05:50.691769 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c60872ff-6905-49ac-9a5c-64272dbc73e4-socket-dir\") pod \"csi-node-driver-crzf7\" (UID: \"c60872ff-6905-49ac-9a5c-64272dbc73e4\") " pod="calico-system/csi-node-driver-crzf7" Jan 29 13:05:50.692479 kubelet[2660]: I0129 13:05:50.691805 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x99n6\" (UniqueName: \"kubernetes.io/projected/c60872ff-6905-49ac-9a5c-64272dbc73e4-kube-api-access-x99n6\") pod \"csi-node-driver-crzf7\" 
(UID: \"c60872ff-6905-49ac-9a5c-64272dbc73e4\") " pod="calico-system/csi-node-driver-crzf7" Jan 29 13:05:50.697543 kubelet[2660]: E0129 13:05:50.697514 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.697543 kubelet[2660]: W0129 13:05:50.697536 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.697678 kubelet[2660]: E0129 13:05:50.697555 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.706445 containerd[1464]: time="2025-01-29T13:05:50.706376560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f6d58bb8b-xg8pd,Uid:e5cff4ed-85ba-4e1a-a236-fd4540825b9e,Namespace:calico-system,Attempt:0,}" Jan 29 13:05:50.716989 kubelet[2660]: E0129 13:05:50.716965 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.717889 kubelet[2660]: W0129 13:05:50.717062 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.717889 kubelet[2660]: E0129 13:05:50.717085 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.757077 containerd[1464]: time="2025-01-29T13:05:50.757008238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:05:50.758069 containerd[1464]: time="2025-01-29T13:05:50.757983558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:05:50.758449 containerd[1464]: time="2025-01-29T13:05:50.758107170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:50.758500 containerd[1464]: time="2025-01-29T13:05:50.758363230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:50.783621 systemd[1]: Started cri-containerd-b449eff7d5921786a1c8485dc8eee522246183b29e2fd2f6a71c51ae1bfeccf6.scope - libcontainer container b449eff7d5921786a1c8485dc8eee522246183b29e2fd2f6a71c51ae1bfeccf6. Jan 29 13:05:50.794163 kubelet[2660]: E0129 13:05:50.794121 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.794163 kubelet[2660]: W0129 13:05:50.794150 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.794163 kubelet[2660]: E0129 13:05:50.794169 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:50.794922 kubelet[2660]: E0129 13:05:50.794489 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.794922 kubelet[2660]: W0129 13:05:50.794502 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.794922 kubelet[2660]: E0129 13:05:50.794516 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.794922 kubelet[2660]: E0129 13:05:50.794701 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.794922 kubelet[2660]: W0129 13:05:50.794709 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.794922 kubelet[2660]: E0129 13:05:50.794737 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.794922 kubelet[2660]: E0129 13:05:50.794914 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.794922 kubelet[2660]: W0129 13:05:50.794923 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.794922 kubelet[2660]: E0129 13:05:50.794933 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.795169 kubelet[2660]: E0129 13:05:50.795108 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.795169 kubelet[2660]: W0129 13:05:50.795117 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.795169 kubelet[2660]: E0129 13:05:50.795125 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.796256 kubelet[2660]: E0129 13:05:50.795313 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.796256 kubelet[2660]: W0129 13:05:50.795338 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.796256 kubelet[2660]: E0129 13:05:50.795347 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:50.796256 kubelet[2660]: E0129 13:05:50.795510 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.796256 kubelet[2660]: W0129 13:05:50.795519 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.796256 kubelet[2660]: E0129 13:05:50.795528 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.796256 kubelet[2660]: E0129 13:05:50.795687 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.796256 kubelet[2660]: W0129 13:05:50.795695 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.796256 kubelet[2660]: E0129 13:05:50.795703 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.796256 kubelet[2660]: E0129 13:05:50.795992 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.796584 kubelet[2660]: W0129 13:05:50.796001 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.796584 kubelet[2660]: E0129 13:05:50.796010 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.796584 kubelet[2660]: E0129 13:05:50.796177 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.796584 kubelet[2660]: W0129 13:05:50.796185 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.796584 kubelet[2660]: E0129 13:05:50.796218 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.796584 kubelet[2660]: E0129 13:05:50.796338 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.796584 kubelet[2660]: W0129 13:05:50.796346 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.796584 kubelet[2660]: E0129 13:05:50.796426 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:50.796584 kubelet[2660]: E0129 13:05:50.796571 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.796584 kubelet[2660]: W0129 13:05:50.796581 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.797120 kubelet[2660]: E0129 13:05:50.796663 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.797120 kubelet[2660]: E0129 13:05:50.796782 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.797120 kubelet[2660]: W0129 13:05:50.796791 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.797120 kubelet[2660]: E0129 13:05:50.796823 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.797120 kubelet[2660]: E0129 13:05:50.796977 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.797120 kubelet[2660]: W0129 13:05:50.796985 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.797120 kubelet[2660]: E0129 13:05:50.796996 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.797278 kubelet[2660]: E0129 13:05:50.797154 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.797278 kubelet[2660]: W0129 13:05:50.797162 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.797278 kubelet[2660]: E0129 13:05:50.797174 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.797951 kubelet[2660]: E0129 13:05:50.797364 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.797951 kubelet[2660]: W0129 13:05:50.797376 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.797951 kubelet[2660]: E0129 13:05:50.797408 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:50.797951 kubelet[2660]: E0129 13:05:50.797577 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.797951 kubelet[2660]: W0129 13:05:50.797585 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.797951 kubelet[2660]: E0129 13:05:50.797654 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.797951 kubelet[2660]: E0129 13:05:50.797751 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.797951 kubelet[2660]: W0129 13:05:50.797758 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.797951 kubelet[2660]: E0129 13:05:50.797842 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.797951 kubelet[2660]: E0129 13:05:50.797948 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.798811 kubelet[2660]: W0129 13:05:50.797956 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.798811 kubelet[2660]: E0129 13:05:50.798041 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.798811 kubelet[2660]: E0129 13:05:50.798140 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.798811 kubelet[2660]: W0129 13:05:50.798150 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.798811 kubelet[2660]: E0129 13:05:50.798163 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.798811 kubelet[2660]: E0129 13:05:50.798481 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.798811 kubelet[2660]: W0129 13:05:50.798490 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.798811 kubelet[2660]: E0129 13:05:50.798523 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:50.798811 kubelet[2660]: E0129 13:05:50.798686 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.798811 kubelet[2660]: W0129 13:05:50.798694 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.800261 kubelet[2660]: E0129 13:05:50.798707 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.800261 kubelet[2660]: E0129 13:05:50.798937 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.800261 kubelet[2660]: W0129 13:05:50.798945 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.800261 kubelet[2660]: E0129 13:05:50.798960 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.800261 kubelet[2660]: E0129 13:05:50.799455 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.800261 kubelet[2660]: W0129 13:05:50.799471 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.800261 kubelet[2660]: E0129 13:05:50.799487 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.801822 kubelet[2660]: E0129 13:05:50.800985 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.801822 kubelet[2660]: W0129 13:05:50.800999 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.801822 kubelet[2660]: E0129 13:05:50.801013 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:50.802388 containerd[1464]: time="2025-01-29T13:05:50.801614051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-npxgs,Uid:c19664e9-9c2e-4c2f-8d2a-080c1e34ac52,Namespace:calico-system,Attempt:0,}" Jan 29 13:05:50.816284 kubelet[2660]: E0129 13:05:50.816260 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:50.816647 kubelet[2660]: W0129 13:05:50.816631 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:50.816998 kubelet[2660]: E0129 13:05:50.816734 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:50.849968 containerd[1464]: time="2025-01-29T13:05:50.848505903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:05:50.849968 containerd[1464]: time="2025-01-29T13:05:50.849233478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:05:50.849968 containerd[1464]: time="2025-01-29T13:05:50.849266480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:50.849968 containerd[1464]: time="2025-01-29T13:05:50.849344476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:50.860178 containerd[1464]: time="2025-01-29T13:05:50.860136052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f6d58bb8b-xg8pd,Uid:e5cff4ed-85ba-4e1a-a236-fd4540825b9e,Namespace:calico-system,Attempt:0,} returns sandbox id \"b449eff7d5921786a1c8485dc8eee522246183b29e2fd2f6a71c51ae1bfeccf6\"" Jan 29 13:05:50.862578 containerd[1464]: time="2025-01-29T13:05:50.862523772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 13:05:50.879568 systemd[1]: Started cri-containerd-7e1ca7482aae2abacf706b45b422c9e7b8a20b1277980255334ad499c5ff659a.scope - libcontainer container 7e1ca7482aae2abacf706b45b422c9e7b8a20b1277980255334ad499c5ff659a. Jan 29 13:05:50.914088 containerd[1464]: time="2025-01-29T13:05:50.914026645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-npxgs,Uid:c19664e9-9c2e-4c2f-8d2a-080c1e34ac52,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e1ca7482aae2abacf706b45b422c9e7b8a20b1277980255334ad499c5ff659a\"" Jan 29 13:05:52.488647 kubelet[2660]: E0129 13:05:52.487711 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:05:52.609016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050330526.mount: Deactivated successfully. 
Jan 29 13:05:54.040501 containerd[1464]: time="2025-01-29T13:05:54.040434193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:54.041937 containerd[1464]: time="2025-01-29T13:05:54.041883772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 29 13:05:54.043230 containerd[1464]: time="2025-01-29T13:05:54.043188039Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:54.045376 containerd[1464]: time="2025-01-29T13:05:54.045355826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:54.046178 containerd[1464]: time="2025-01-29T13:05:54.046134115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.183547526s" Jan 29 13:05:54.046178 containerd[1464]: time="2025-01-29T13:05:54.046168670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 13:05:54.048121 containerd[1464]: time="2025-01-29T13:05:54.048077702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 13:05:54.062411 containerd[1464]: time="2025-01-29T13:05:54.061254511Z" level=info msg="CreateContainer within sandbox \"b449eff7d5921786a1c8485dc8eee522246183b29e2fd2f6a71c51ae1bfeccf6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 13:05:54.083404 containerd[1464]: time="2025-01-29T13:05:54.083356522Z" level=info msg="CreateContainer within sandbox \"b449eff7d5921786a1c8485dc8eee522246183b29e2fd2f6a71c51ae1bfeccf6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fdb34cc3b5f6c3199a9b819ea4c86ee3bfdb97889fb97520958392f576a84fff\"" Jan 29 13:05:54.084366 containerd[1464]: time="2025-01-29T13:05:54.084345107Z" level=info msg="StartContainer for \"fdb34cc3b5f6c3199a9b819ea4c86ee3bfdb97889fb97520958392f576a84fff\"" Jan 29 13:05:54.119585 systemd[1]: Started cri-containerd-fdb34cc3b5f6c3199a9b819ea4c86ee3bfdb97889fb97520958392f576a84fff.scope - libcontainer container fdb34cc3b5f6c3199a9b819ea4c86ee3bfdb97889fb97520958392f576a84fff. 
Jan 29 13:05:54.169327 containerd[1464]: time="2025-01-29T13:05:54.169280901Z" level=info msg="StartContainer for \"fdb34cc3b5f6c3199a9b819ea4c86ee3bfdb97889fb97520958392f576a84fff\" returns successfully" Jan 29 13:05:54.487944 kubelet[2660]: E0129 13:05:54.487535 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:05:54.708866 kubelet[2660]: E0129 13:05:54.708801 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.708999 kubelet[2660]: W0129 13:05:54.708933 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.709039 kubelet[2660]: E0129 13:05:54.708974 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.709543 kubelet[2660]: E0129 13:05:54.709515 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.709606 kubelet[2660]: W0129 13:05:54.709548 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.709650 kubelet[2660]: E0129 13:05:54.709635 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.710177 kubelet[2660]: E0129 13:05:54.710148 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.710223 kubelet[2660]: W0129 13:05:54.710179 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.710223 kubelet[2660]: E0129 13:05:54.710203 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.710662 kubelet[2660]: E0129 13:05:54.710613 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.710708 kubelet[2660]: W0129 13:05:54.710679 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.710734 kubelet[2660]: E0129 13:05:54.710704 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:54.711225 kubelet[2660]: E0129 13:05:54.711162 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.711264 kubelet[2660]: W0129 13:05:54.711228 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.711264 kubelet[2660]: E0129 13:05:54.711251 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.711678 kubelet[2660]: E0129 13:05:54.711651 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.711724 kubelet[2660]: W0129 13:05:54.711680 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.711724 kubelet[2660]: E0129 13:05:54.711703 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.712069 kubelet[2660]: E0129 13:05:54.712040 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.712108 kubelet[2660]: W0129 13:05:54.712093 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.712139 kubelet[2660]: E0129 13:05:54.712116 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.712520 kubelet[2660]: E0129 13:05:54.712493 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.712566 kubelet[2660]: W0129 13:05:54.712522 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.712617 kubelet[2660]: E0129 13:05:54.712586 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.712959 kubelet[2660]: E0129 13:05:54.712933 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.713001 kubelet[2660]: W0129 13:05:54.712962 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.713033 kubelet[2660]: E0129 13:05:54.713007 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:54.713377 kubelet[2660]: E0129 13:05:54.713350 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.713443 kubelet[2660]: W0129 13:05:54.713381 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.713471 kubelet[2660]: E0129 13:05:54.713457 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.713791 kubelet[2660]: E0129 13:05:54.713765 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.713834 kubelet[2660]: W0129 13:05:54.713794 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.713834 kubelet[2660]: E0129 13:05:54.713817 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.714170 kubelet[2660]: E0129 13:05:54.714144 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.714213 kubelet[2660]: W0129 13:05:54.714172 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.714213 kubelet[2660]: E0129 13:05:54.714197 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.714625 kubelet[2660]: E0129 13:05:54.714597 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.714674 kubelet[2660]: W0129 13:05:54.714627 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.714703 kubelet[2660]: E0129 13:05:54.714688 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.715116 kubelet[2660]: E0129 13:05:54.715089 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.715160 kubelet[2660]: W0129 13:05:54.715119 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.715160 kubelet[2660]: E0129 13:05:54.715140 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:54.715572 kubelet[2660]: E0129 13:05:54.715546 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.715621 kubelet[2660]: W0129 13:05:54.715575 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.715650 kubelet[2660]: E0129 13:05:54.715622 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.720994 kubelet[2660]: E0129 13:05:54.720954 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.720994 kubelet[2660]: W0129 13:05:54.720988 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.721077 kubelet[2660]: E0129 13:05:54.721011 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.721483 kubelet[2660]: E0129 13:05:54.721458 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.721532 kubelet[2660]: W0129 13:05:54.721487 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.721595 kubelet[2660]: E0129 13:05:54.721569 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.722103 kubelet[2660]: E0129 13:05:54.722063 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.722103 kubelet[2660]: W0129 13:05:54.722098 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.722302 kubelet[2660]: E0129 13:05:54.722167 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.722691 kubelet[2660]: E0129 13:05:54.722670 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.723256 kubelet[2660]: W0129 13:05:54.722750 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.723256 kubelet[2660]: E0129 13:05:54.722781 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:54.723325 kubelet[2660]: E0129 13:05:54.723276 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.723325 kubelet[2660]: W0129 13:05:54.723299 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.723636 kubelet[2660]: E0129 13:05:54.723553 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.723823 kubelet[2660]: E0129 13:05:54.723787 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.723860 kubelet[2660]: W0129 13:05:54.723818 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.723919 kubelet[2660]: E0129 13:05:54.723892 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.724245 kubelet[2660]: E0129 13:05:54.724205 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.724424 kubelet[2660]: W0129 13:05:54.724293 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.724424 kubelet[2660]: E0129 13:05:54.724316 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.724762 kubelet[2660]: E0129 13:05:54.724710 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.724762 kubelet[2660]: W0129 13:05:54.724721 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.724762 kubelet[2660]: E0129 13:05:54.724741 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.725188 kubelet[2660]: E0129 13:05:54.725149 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.725188 kubelet[2660]: W0129 13:05:54.725183 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.725274 kubelet[2660]: E0129 13:05:54.725252 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:54.725781 kubelet[2660]: E0129 13:05:54.725753 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.725836 kubelet[2660]: W0129 13:05:54.725784 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.725924 kubelet[2660]: E0129 13:05:54.725879 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.726311 kubelet[2660]: E0129 13:05:54.726273 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.726311 kubelet[2660]: W0129 13:05:54.726305 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.726449 kubelet[2660]: E0129 13:05:54.726423 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.726887 kubelet[2660]: E0129 13:05:54.726850 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.726887 kubelet[2660]: W0129 13:05:54.726881 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.727017 kubelet[2660]: E0129 13:05:54.726980 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.727457 kubelet[2660]: E0129 13:05:54.727353 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.727520 kubelet[2660]: W0129 13:05:54.727496 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.727723 kubelet[2660]: E0129 13:05:54.727567 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.728023 kubelet[2660]: E0129 13:05:54.727984 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.728023 kubelet[2660]: W0129 13:05:54.728017 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.728115 kubelet[2660]: E0129 13:05:54.728102 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:54.728510 kubelet[2660]: E0129 13:05:54.728475 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.728559 kubelet[2660]: W0129 13:05:54.728507 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.728675 kubelet[2660]: E0129 13:05:54.728662 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.729014 kubelet[2660]: E0129 13:05:54.728866 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.729014 kubelet[2660]: W0129 13:05:54.728896 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.729014 kubelet[2660]: E0129 13:05:54.728920 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.729330 kubelet[2660]: E0129 13:05:54.729305 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.729377 kubelet[2660]: W0129 13:05:54.729333 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.729377 kubelet[2660]: E0129 13:05:54.729365 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:54.729823 kubelet[2660]: E0129 13:05:54.729797 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:54.729877 kubelet[2660]: W0129 13:05:54.729826 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:54.729877 kubelet[2660]: E0129 13:05:54.729848 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.631452 kubelet[2660]: I0129 13:05:55.630452 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 13:05:55.725293 kubelet[2660]: E0129 13:05:55.725267 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.725293 kubelet[2660]: W0129 13:05:55.725286 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.725479 kubelet[2660]: E0129 13:05:55.725316 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:55.725667 kubelet[2660]: E0129 13:05:55.725649 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.725667 kubelet[2660]: W0129 13:05:55.725662 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.725811 kubelet[2660]: E0129 13:05:55.725792 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.726081 kubelet[2660]: E0129 13:05:55.726064 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.726081 kubelet[2660]: W0129 13:05:55.726077 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.726358 kubelet[2660]: E0129 13:05:55.726086 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.726358 kubelet[2660]: E0129 13:05:55.726313 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.726836 kubelet[2660]: W0129 13:05:55.726584 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.726836 kubelet[2660]: E0129 13:05:55.726604 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.727191 kubelet[2660]: E0129 13:05:55.727125 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.727191 kubelet[2660]: W0129 13:05:55.727138 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.727191 kubelet[2660]: E0129 13:05:55.727148 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.728189 kubelet[2660]: E0129 13:05:55.728173 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.728189 kubelet[2660]: W0129 13:05:55.728186 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.728260 kubelet[2660]: E0129 13:05:55.728196 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:55.729003 kubelet[2660]: E0129 13:05:55.728976 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.729003 kubelet[2660]: W0129 13:05:55.728989 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.729003 kubelet[2660]: E0129 13:05:55.728998 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.729207 kubelet[2660]: E0129 13:05:55.729167 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.729207 kubelet[2660]: W0129 13:05:55.729175 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.729207 kubelet[2660]: E0129 13:05:55.729184 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.729501 kubelet[2660]: E0129 13:05:55.729334 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.729501 kubelet[2660]: W0129 13:05:55.729346 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.729501 kubelet[2660]: E0129 13:05:55.729355 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.729707 kubelet[2660]: E0129 13:05:55.729662 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.729707 kubelet[2660]: W0129 13:05:55.729671 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.729707 kubelet[2660]: E0129 13:05:55.729680 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.729904 kubelet[2660]: E0129 13:05:55.729880 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.729904 kubelet[2660]: W0129 13:05:55.729892 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.729904 kubelet[2660]: E0129 13:05:55.729902 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:55.730661 kubelet[2660]: E0129 13:05:55.730626 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.730661 kubelet[2660]: W0129 13:05:55.730639 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.730661 kubelet[2660]: E0129 13:05:55.730651 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.731439 kubelet[2660]: E0129 13:05:55.731104 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.731439 kubelet[2660]: W0129 13:05:55.731114 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.731439 kubelet[2660]: E0129 13:05:55.731123 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.731664 kubelet[2660]: E0129 13:05:55.731556 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.731664 kubelet[2660]: W0129 13:05:55.731569 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.731664 kubelet[2660]: E0129 13:05:55.731578 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.731899 kubelet[2660]: E0129 13:05:55.731882 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.731899 kubelet[2660]: W0129 13:05:55.731898 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.732056 kubelet[2660]: E0129 13:05:55.731907 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.732361 kubelet[2660]: E0129 13:05:55.732324 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.732361 kubelet[2660]: W0129 13:05:55.732338 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.732361 kubelet[2660]: E0129 13:05:55.732348 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:55.733334 kubelet[2660]: E0129 13:05:55.733280 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.733856 kubelet[2660]: W0129 13:05:55.733698 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.733856 kubelet[2660]: E0129 13:05:55.733725 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.734797 kubelet[2660]: E0129 13:05:55.734785 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.734984 kubelet[2660]: W0129 13:05:55.734971 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.735064 kubelet[2660]: E0129 13:05:55.735049 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.735307 kubelet[2660]: E0129 13:05:55.735271 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.735307 kubelet[2660]: W0129 13:05:55.735286 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.735307 kubelet[2660]: E0129 13:05:55.735303 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.735914 kubelet[2660]: E0129 13:05:55.735898 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.735914 kubelet[2660]: W0129 13:05:55.735911 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.736046 kubelet[2660]: E0129 13:05:55.736025 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.736621 kubelet[2660]: E0129 13:05:55.736604 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.736621 kubelet[2660]: W0129 13:05:55.736616 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.736750 kubelet[2660]: E0129 13:05:55.736670 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:55.737088 kubelet[2660]: E0129 13:05:55.736956 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.737088 kubelet[2660]: W0129 13:05:55.736969 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.737088 kubelet[2660]: E0129 13:05:55.736994 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.737088 kubelet[2660]: E0129 13:05:55.737104 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.737227 kubelet[2660]: W0129 13:05:55.737112 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.737227 kubelet[2660]: E0129 13:05:55.737128 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.737468 kubelet[2660]: E0129 13:05:55.737450 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.737468 kubelet[2660]: W0129 13:05:55.737463 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.737551 kubelet[2660]: E0129 13:05:55.737479 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.737996 kubelet[2660]: E0129 13:05:55.737808 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.737996 kubelet[2660]: W0129 13:05:55.737820 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.737996 kubelet[2660]: E0129 13:05:55.737854 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.738461 kubelet[2660]: E0129 13:05:55.738247 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.738461 kubelet[2660]: W0129 13:05:55.738258 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.738461 kubelet[2660]: E0129 13:05:55.738342 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:55.738847 kubelet[2660]: E0129 13:05:55.738797 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.738847 kubelet[2660]: W0129 13:05:55.738807 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.738847 kubelet[2660]: E0129 13:05:55.738835 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.739328 kubelet[2660]: E0129 13:05:55.739214 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.739328 kubelet[2660]: W0129 13:05:55.739225 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.739328 kubelet[2660]: E0129 13:05:55.739253 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.739641 kubelet[2660]: E0129 13:05:55.739587 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.739641 kubelet[2660]: W0129 13:05:55.739600 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.739641 kubelet[2660]: E0129 13:05:55.739627 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.740103 kubelet[2660]: E0129 13:05:55.740093 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.740336 kubelet[2660]: W0129 13:05:55.740197 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.740336 kubelet[2660]: E0129 13:05:55.740218 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.740584 kubelet[2660]: E0129 13:05:55.740478 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.740584 kubelet[2660]: W0129 13:05:55.740489 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.740584 kubelet[2660]: E0129 13:05:55.740505 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 13:05:55.740876 kubelet[2660]: E0129 13:05:55.740866 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.741000 kubelet[2660]: W0129 13:05:55.740946 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.741000 kubelet[2660]: E0129 13:05:55.740961 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.742504 kubelet[2660]: E0129 13:05:55.742488 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:55.742504 kubelet[2660]: W0129 13:05:55.742501 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:55.742581 kubelet[2660]: E0129 13:05:55.742512 2660 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:55.836717 containerd[1464]: time="2025-01-29T13:05:55.836676183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:55.838053 containerd[1464]: time="2025-01-29T13:05:55.837896933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 13:05:55.839279 containerd[1464]: time="2025-01-29T13:05:55.839218783Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:55.842108 containerd[1464]: time="2025-01-29T13:05:55.842068939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:55.843311 containerd[1464]: time="2025-01-29T13:05:55.842783760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.794675321s" Jan 29 13:05:55.843311 containerd[1464]: time="2025-01-29T13:05:55.842827552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 13:05:55.845534 containerd[1464]: time="2025-01-29T13:05:55.845506307Z" level=info msg="CreateContainer within sandbox \"7e1ca7482aae2abacf706b45b422c9e7b8a20b1277980255334ad499c5ff659a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 13:05:55.867132 containerd[1464]: time="2025-01-29T13:05:55.867065731Z" level=info msg="CreateContainer within sandbox 
\"7e1ca7482aae2abacf706b45b422c9e7b8a20b1277980255334ad499c5ff659a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e\"" Jan 29 13:05:55.867900 containerd[1464]: time="2025-01-29T13:05:55.867876070Z" level=info msg="StartContainer for \"8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e\"" Jan 29 13:05:55.899441 systemd[1]: run-containerd-runc-k8s.io-8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e-runc.93xrGv.mount: Deactivated successfully. Jan 29 13:05:55.917570 systemd[1]: Started cri-containerd-8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e.scope - libcontainer container 8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e. Jan 29 13:05:55.950342 containerd[1464]: time="2025-01-29T13:05:55.950050312Z" level=info msg="StartContainer for \"8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e\" returns successfully" Jan 29 13:05:55.964753 systemd[1]: cri-containerd-8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e.scope: Deactivated successfully. Jan 29 13:05:56.487616 kubelet[2660]: E0129 13:05:56.487360 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:05:56.607527 containerd[1464]: time="2025-01-29T13:05:56.605828887Z" level=info msg="shim disconnected" id=8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e namespace=k8s.io Jan 29 13:05:56.607527 containerd[1464]: time="2025-01-29T13:05:56.605963790Z" level=warning msg="cleaning up after shim disconnected" id=8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e namespace=k8s.io Jan 29 13:05:56.607527 containerd[1464]: time="2025-01-29T13:05:56.605991182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 13:05:56.672108 kubelet[2660]: I0129 13:05:56.672040 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f6d58bb8b-xg8pd" podStartSLOduration=3.48612301 podStartE2EDuration="6.672023721s" podCreationTimestamp="2025-01-29 13:05:50 +0000 UTC" firstStartedPulling="2025-01-29 13:05:50.861705456 +0000 UTC m=+21.578695023" lastFinishedPulling="2025-01-29 13:05:54.047606157 +0000 UTC m=+24.764595734" observedRunningTime="2025-01-29 13:05:54.659980922 +0000 UTC m=+25.376970540" watchObservedRunningTime="2025-01-29 13:05:56.672023721 +0000 UTC m=+27.389013288" Jan 29 13:05:56.861789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dc157f997a0990bc6e13ac5b4927b4fb4f4c1f19fecd287c501de63cfc2eb0e-rootfs.mount: Deactivated successfully. 
Jan 29 13:05:57.655494 containerd[1464]: time="2025-01-29T13:05:57.655335863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 13:05:58.488100 kubelet[2660]: E0129 13:05:58.488014 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:06:00.488270 kubelet[2660]: E0129 13:06:00.487259 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:06:02.487046 kubelet[2660]: E0129 13:06:02.486966 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:06:03.127985 containerd[1464]: time="2025-01-29T13:06:03.127798301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:03.129428 containerd[1464]: time="2025-01-29T13:06:03.129310529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 13:06:03.131041 containerd[1464]: time="2025-01-29T13:06:03.130995068Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:03.133686 containerd[1464]: time="2025-01-29T13:06:03.133648906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:03.134866 containerd[1464]: time="2025-01-29T13:06:03.134377152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.478914141s" Jan 29 13:06:03.134866 containerd[1464]: time="2025-01-29T13:06:03.134423369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 13:06:03.136717 containerd[1464]: time="2025-01-29T13:06:03.136561138Z" level=info msg="CreateContainer within sandbox \"7e1ca7482aae2abacf706b45b422c9e7b8a20b1277980255334ad499c5ff659a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 13:06:03.161489 containerd[1464]: time="2025-01-29T13:06:03.161444633Z" level=info msg="CreateContainer within sandbox \"7e1ca7482aae2abacf706b45b422c9e7b8a20b1277980255334ad499c5ff659a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"32bee77e389be7b79d6ccabf976be1a555aee6364f9cb2c85647352e6a7cb763\"" Jan 29 13:06:03.163202 containerd[1464]: 
time="2025-01-29T13:06:03.162376061Z" level=info msg="StartContainer for \"32bee77e389be7b79d6ccabf976be1a555aee6364f9cb2c85647352e6a7cb763\"" Jan 29 13:06:03.198544 systemd[1]: Started cri-containerd-32bee77e389be7b79d6ccabf976be1a555aee6364f9cb2c85647352e6a7cb763.scope - libcontainer container 32bee77e389be7b79d6ccabf976be1a555aee6364f9cb2c85647352e6a7cb763. Jan 29 13:06:03.230943 containerd[1464]: time="2025-01-29T13:06:03.230820715Z" level=info msg="StartContainer for \"32bee77e389be7b79d6ccabf976be1a555aee6364f9cb2c85647352e6a7cb763\" returns successfully" Jan 29 13:06:04.446079 containerd[1464]: time="2025-01-29T13:06:04.445968705Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 13:06:04.450469 systemd[1]: cri-containerd-32bee77e389be7b79d6ccabf976be1a555aee6364f9cb2c85647352e6a7cb763.scope: Deactivated successfully. Jan 29 13:06:04.488525 kubelet[2660]: E0129 13:06:04.487694 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:06:04.496773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32bee77e389be7b79d6ccabf976be1a555aee6364f9cb2c85647352e6a7cb763-rootfs.mount: Deactivated successfully. Jan 29 13:06:04.545235 kubelet[2660]: I0129 13:06:04.545188 2660 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 13:06:04.793116 kubelet[2660]: I0129 13:06:04.766045 2660 topology_manager.go:215] "Topology Admit Handler" podUID="2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6b7pj" Jan 29 13:06:04.776671 systemd[1]: Created slice kubepods-burstable-pod2ca282d0_cdb1_4b7f_a6d5_0674baf19e5a.slice - libcontainer container kubepods-burstable-pod2ca282d0_cdb1_4b7f_a6d5_0674baf19e5a.slice. 
Jan 29 13:06:04.800131 kubelet[2660]: I0129 13:06:04.799918 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8z2\" (UniqueName: \"kubernetes.io/projected/2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a-kube-api-access-ft8z2\") pod \"coredns-7db6d8ff4d-6b7pj\" (UID: \"2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a\") " pod="kube-system/coredns-7db6d8ff4d-6b7pj" Jan 29 13:06:04.800131 kubelet[2660]: I0129 13:06:04.800008 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a-config-volume\") pod \"coredns-7db6d8ff4d-6b7pj\" (UID: \"2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a\") " pod="kube-system/coredns-7db6d8ff4d-6b7pj" Jan 29 13:06:04.835188 kubelet[2660]: I0129 13:06:04.834628 2660 topology_manager.go:215] "Topology Admit Handler" podUID="33e601d8-e340-43d3-8175-0473e13a164d" podNamespace="calico-apiserver" podName="calico-apiserver-59f5d86475-4cr5v" Jan 29 13:06:04.837904 kubelet[2660]: I0129 13:06:04.836871 2660 topology_manager.go:215] "Topology Admit Handler" podUID="ebf5e465-ca1d-4589-8d98-2c00876ac6ac" podNamespace="calico-apiserver" podName="calico-apiserver-59f5d86475-77ss4" Jan 29 13:06:04.839792 kubelet[2660]: I0129 13:06:04.839638 2660 topology_manager.go:215] "Topology Admit Handler" podUID="2ce8b713-0eab-46e6-97eb-990957745903" podNamespace="calico-system" podName="calico-kube-controllers-6cd8fd798f-wmf87" Jan 29 13:06:04.840785 kubelet[2660]: I0129 13:06:04.840741 2660 topology_manager.go:215] "Topology Admit Handler" podUID="42652645-3ddd-4845-94f8-f2a42fdbd94a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4pjxk" Jan 29 13:06:04.852035 systemd[1]: Created slice kubepods-besteffort-podebf5e465_ca1d_4589_8d98_2c00876ac6ac.slice - libcontainer container kubepods-besteffort-podebf5e465_ca1d_4589_8d98_2c00876ac6ac.slice. Jan 29 13:06:04.859536 systemd[1]: Created slice kubepods-besteffort-pod33e601d8_e340_43d3_8175_0473e13a164d.slice - libcontainer container kubepods-besteffort-pod33e601d8_e340_43d3_8175_0473e13a164d.slice. Jan 29 13:06:04.866910 systemd[1]: Created slice kubepods-besteffort-pod2ce8b713_0eab_46e6_97eb_990957745903.slice - libcontainer container kubepods-besteffort-pod2ce8b713_0eab_46e6_97eb_990957745903.slice. Jan 29 13:06:04.873590 systemd[1]: Created slice kubepods-burstable-pod42652645_3ddd_4845_94f8_f2a42fdbd94a.slice - libcontainer container kubepods-burstable-pod42652645_3ddd_4845_94f8_f2a42fdbd94a.slice. 
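The "Created slice" units above follow the kubelet's systemd cgroup naming scheme: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID rewritten to underscores because "-" is systemd's slice-hierarchy separator. A small sketch reproducing the names seen in this log (an assumed simplification of the kubelet's cgroup-name escaping):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the systemd slice unit the kubelet creates for a
// pod: QoS class plus pod UID, with "-" in the UID mapped to "_" so the
// UID cannot collide with systemd's "-" hierarchy separator.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Reproduces "kubepods-burstable-pod2ca282d0_cdb1_4b7f_a6d5_0674baf19e5a.slice"
	// and "kubepods-besteffort-podebf5e465_ca1d_4589_8d98_2c00876ac6ac.slice" above.
	fmt.Println(podSliceName("burstable", "2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a"))
	fmt.Println(podSliceName("besteffort", "ebf5e465-ca1d-4589-8d98-2c00876ac6ac"))
}
```

The QoS segment also explains the split visible above: the coredns pods (with resource requests below limits) land under kubepods-burstable, while the calico-apiserver and kube-controllers pods, which declare no resources, land under kubepods-besteffort.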
Jan 29 13:06:05.000947 kubelet[2660]: I0129 13:06:05.000893 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdxp\" (UniqueName: \"kubernetes.io/projected/42652645-3ddd-4845-94f8-f2a42fdbd94a-kube-api-access-hqdxp\") pod \"coredns-7db6d8ff4d-4pjxk\" (UID: \"42652645-3ddd-4845-94f8-f2a42fdbd94a\") " pod="kube-system/coredns-7db6d8ff4d-4pjxk" Jan 29 13:06:05.002289 kubelet[2660]: I0129 13:06:05.001923 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/33e601d8-e340-43d3-8175-0473e13a164d-calico-apiserver-certs\") pod \"calico-apiserver-59f5d86475-4cr5v\" (UID: \"33e601d8-e340-43d3-8175-0473e13a164d\") " pod="calico-apiserver/calico-apiserver-59f5d86475-4cr5v" Jan 29 13:06:05.002289 kubelet[2660]: I0129 13:06:05.002035 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7dnp\" (UniqueName: \"kubernetes.io/projected/2ce8b713-0eab-46e6-97eb-990957745903-kube-api-access-v7dnp\") pod \"calico-kube-controllers-6cd8fd798f-wmf87\" (UID: \"2ce8b713-0eab-46e6-97eb-990957745903\") " pod="calico-system/calico-kube-controllers-6cd8fd798f-wmf87" Jan 29 13:06:05.002289 kubelet[2660]: I0129 13:06:05.002197 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ce8b713-0eab-46e6-97eb-990957745903-tigera-ca-bundle\") pod \"calico-kube-controllers-6cd8fd798f-wmf87\" (UID: \"2ce8b713-0eab-46e6-97eb-990957745903\") " pod="calico-system/calico-kube-controllers-6cd8fd798f-wmf87" Jan 29 13:06:05.006379 kubelet[2660]: I0129 13:06:05.005087 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpbgn\" (UniqueName: \"kubernetes.io/projected/33e601d8-e340-43d3-8175-0473e13a164d-kube-api-access-vpbgn\") pod \"calico-apiserver-59f5d86475-4cr5v\" (UID: \"33e601d8-e340-43d3-8175-0473e13a164d\") " pod="calico-apiserver/calico-apiserver-59f5d86475-4cr5v" Jan 29 13:06:05.006379 kubelet[2660]: I0129 13:06:05.005183 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42652645-3ddd-4845-94f8-f2a42fdbd94a-config-volume\") pod \"coredns-7db6d8ff4d-4pjxk\" (UID: \"42652645-3ddd-4845-94f8-f2a42fdbd94a\") " pod="kube-system/coredns-7db6d8ff4d-4pjxk" Jan 29 13:06:05.006379 kubelet[2660]: I0129 13:06:05.005282 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdq5q\" (UniqueName: \"kubernetes.io/projected/ebf5e465-ca1d-4589-8d98-2c00876ac6ac-kube-api-access-zdq5q\") pod \"calico-apiserver-59f5d86475-77ss4\" (UID: \"ebf5e465-ca1d-4589-8d98-2c00876ac6ac\") " pod="calico-apiserver/calico-apiserver-59f5d86475-77ss4" Jan 29 13:06:05.006379 kubelet[2660]: I0129 13:06:05.005386 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ebf5e465-ca1d-4589-8d98-2c00876ac6ac-calico-apiserver-certs\") pod \"calico-apiserver-59f5d86475-77ss4\" (UID: \"ebf5e465-ca1d-4589-8d98-2c00876ac6ac\") " pod="calico-apiserver/calico-apiserver-59f5d86475-77ss4" Jan 29 13:06:05.091689 containerd[1464]: time="2025-01-29T13:06:05.091548987Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-6b7pj,Uid:2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a,Namespace:kube-system,Attempt:0,}" Jan 29 13:06:05.374140 containerd[1464]: time="2025-01-29T13:06:05.373872610Z" level=info msg="shim disconnected" id=32bee77e389be7b79d6ccabf976be1a555aee6364f9cb2c85647352e6a7cb763 namespace=k8s.io Jan 29 13:06:05.374140 containerd[1464]: time="2025-01-29T13:06:05.373924556Z" level=warning msg="cleaning up after shim disconnected" id=32bee77e389be7b79d6ccabf976be1a555aee6364f9cb2c85647352e6a7cb763 namespace=k8s.io Jan 29 13:06:05.374140 containerd[1464]: time="2025-01-29T13:06:05.373935106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 13:06:05.393975 containerd[1464]: time="2025-01-29T13:06:05.393915687Z" level=warning msg="cleanup warnings time=\"2025-01-29T13:06:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 13:06:05.442879 containerd[1464]: time="2025-01-29T13:06:05.442811715Z" level=error msg="Failed to destroy network for sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.443366 containerd[1464]: time="2025-01-29T13:06:05.443327869Z" level=error msg="encountered an error cleaning up failed sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.443511 containerd[1464]: time="2025-01-29T13:06:05.443485814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6b7pj,Uid:2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.443846 kubelet[2660]: E0129 13:06:05.443796 2660 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.443922 kubelet[2660]: E0129 13:06:05.443873 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6b7pj" Jan 29 13:06:05.443922 kubelet[2660]: E0129 13:06:05.443897 2660 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6b7pj" Jan 29 13:06:05.444005 kubelet[2660]: E0129 13:06:05.443967 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6b7pj_kube-system(2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6b7pj_kube-system(2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6b7pj" podUID="2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a" Jan 29 13:06:05.457329 containerd[1464]: time="2025-01-29T13:06:05.457258213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f5d86475-77ss4,Uid:ebf5e465-ca1d-4589-8d98-2c00876ac6ac,Namespace:calico-apiserver,Attempt:0,}" Jan 29 13:06:05.465038 containerd[1464]: time="2025-01-29T13:06:05.464778788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f5d86475-4cr5v,Uid:33e601d8-e340-43d3-8175-0473e13a164d,Namespace:calico-apiserver,Attempt:0,}" Jan 29 13:06:05.473900 containerd[1464]: time="2025-01-29T13:06:05.473725648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd8fd798f-wmf87,Uid:2ce8b713-0eab-46e6-97eb-990957745903,Namespace:calico-system,Attempt:0,}" Jan 29 13:06:05.478887 containerd[1464]: time="2025-01-29T13:06:05.478854292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4pjxk,Uid:42652645-3ddd-4845-94f8-f2a42fdbd94a,Namespace:kube-system,Attempt:0,}" Jan 29 13:06:05.623119 containerd[1464]: time="2025-01-29T13:06:05.623073202Z" level=error msg="Failed to destroy network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.623954 containerd[1464]: time="2025-01-29T13:06:05.623802404Z" level=error msg="encountered an error cleaning up failed sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.624520 containerd[1464]: time="2025-01-29T13:06:05.623915936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f5d86475-77ss4,Uid:ebf5e465-ca1d-4589-8d98-2c00876ac6ac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.625237 kubelet[2660]: E0129 13:06:05.624727 2660 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.625237 kubelet[2660]: E0129 13:06:05.625086 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f5d86475-77ss4" Jan 29 13:06:05.625237 kubelet[2660]: E0129 13:06:05.625118 2660 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f5d86475-77ss4" Jan 29 13:06:05.625874 kubelet[2660]: E0129 13:06:05.625197 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f5d86475-77ss4_calico-apiserver(ebf5e465-ca1d-4589-8d98-2c00876ac6ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f5d86475-77ss4_calico-apiserver(ebf5e465-ca1d-4589-8d98-2c00876ac6ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f5d86475-77ss4" podUID="ebf5e465-ca1d-4589-8d98-2c00876ac6ac" Jan 29 13:06:05.659018 containerd[1464]: time="2025-01-29T13:06:05.658472426Z" level=error msg="Failed to destroy network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.659018 containerd[1464]: time="2025-01-29T13:06:05.658880960Z" level=error msg="encountered an error cleaning up failed sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.659018 containerd[1464]: time="2025-01-29T13:06:05.658928068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f5d86475-4cr5v,Uid:33e601d8-e340-43d3-8175-0473e13a164d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.659700 kubelet[2660]: E0129 13:06:05.659525 2660 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.659840 kubelet[2660]: E0129 13:06:05.659809 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f5d86475-4cr5v" Jan 29 13:06:05.659919 kubelet[2660]: E0129 13:06:05.659901 2660 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f5d86475-4cr5v" Jan 29 13:06:05.660340 kubelet[2660]: E0129 13:06:05.660008 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f5d86475-4cr5v_calico-apiserver(33e601d8-e340-43d3-8175-0473e13a164d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f5d86475-4cr5v_calico-apiserver(33e601d8-e340-43d3-8175-0473e13a164d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f5d86475-4cr5v" podUID="33e601d8-e340-43d3-8175-0473e13a164d" Jan 29 13:06:05.664247 containerd[1464]: time="2025-01-29T13:06:05.664164053Z" level=error msg="Failed to destroy network for sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.664593 containerd[1464]: time="2025-01-29T13:06:05.664534205Z" level=error msg="encountered an error cleaning up failed sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.664676 containerd[1464]: time="2025-01-29T13:06:05.664600148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd8fd798f-wmf87,Uid:2ce8b713-0eab-46e6-97eb-990957745903,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.664906 kubelet[2660]: E0129 13:06:05.664875 2660 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.665328 kubelet[2660]: E0129 13:06:05.665020 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cd8fd798f-wmf87" Jan 29 13:06:05.665328 kubelet[2660]: E0129 13:06:05.665047 2660 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cd8fd798f-wmf87" Jan 29 13:06:05.665328 kubelet[2660]: E0129 13:06:05.665124 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cd8fd798f-wmf87_calico-system(2ce8b713-0eab-46e6-97eb-990957745903)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cd8fd798f-wmf87_calico-system(2ce8b713-0eab-46e6-97eb-990957745903)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cd8fd798f-wmf87" podUID="2ce8b713-0eab-46e6-97eb-990957745903" Jan 29 13:06:05.666573 containerd[1464]: time="2025-01-29T13:06:05.666510438Z" level=error msg="Failed to destroy network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.666854 containerd[1464]: time="2025-01-29T13:06:05.666811581Z" level=error msg="encountered an error cleaning up failed sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.666906 containerd[1464]: time="2025-01-29T13:06:05.666857086Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4pjxk,Uid:42652645-3ddd-4845-94f8-f2a42fdbd94a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.667123 kubelet[2660]: E0129 13:06:05.667085 2660 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.667175 kubelet[2660]: E0129 13:06:05.667142 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4pjxk" Jan 29 13:06:05.667175 kubelet[2660]: E0129 13:06:05.667165 2660 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4pjxk" Jan 29 13:06:05.667256 kubelet[2660]: E0129 13:06:05.667216 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4pjxk_kube-system(42652645-3ddd-4845-94f8-f2a42fdbd94a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4pjxk_kube-system(42652645-3ddd-4845-94f8-f2a42fdbd94a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4pjxk" podUID="42652645-3ddd-4845-94f8-f2a42fdbd94a" Jan 29 13:06:05.679125 kubelet[2660]: I0129 13:06:05.678695 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:05.679275 containerd[1464]: time="2025-01-29T13:06:05.679248914Z" level=info msg="StopPodSandbox for \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\"" Jan 29 13:06:05.679447 containerd[1464]: time="2025-01-29T13:06:05.679426316Z" level=info msg="Ensure that sandbox 5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee in task-service has been cleanup successfully" Jan 29 13:06:05.681161 kubelet[2660]: I0129 13:06:05.681135 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:05.682185 containerd[1464]: 
time="2025-01-29T13:06:05.682153933Z" level=info msg="StopPodSandbox for \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\"" Jan 29 13:06:05.682519 containerd[1464]: time="2025-01-29T13:06:05.682495170Z" level=info msg="Ensure that sandbox 1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc in task-service has been cleanup successfully" Jan 29 13:06:05.696108 containerd[1464]: time="2025-01-29T13:06:05.695461874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 13:06:05.696674 kubelet[2660]: I0129 13:06:05.696418 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:05.698523 containerd[1464]: time="2025-01-29T13:06:05.698245636Z" level=info msg="StopPodSandbox for \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\"" Jan 29 13:06:05.699039 containerd[1464]: time="2025-01-29T13:06:05.699017689Z" level=info msg="Ensure that sandbox b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff in task-service has been cleanup successfully" Jan 29 13:06:05.711058 kubelet[2660]: I0129 13:06:05.710918 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:05.718318 containerd[1464]: time="2025-01-29T13:06:05.718111331Z" level=info msg="StopPodSandbox for \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\"" Jan 29 13:06:05.719276 containerd[1464]: time="2025-01-29T13:06:05.719175099Z" level=info msg="Ensure that sandbox c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9 in task-service has been cleanup successfully" Jan 29 13:06:05.726684 kubelet[2660]: I0129 13:06:05.725894 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:05.729135 containerd[1464]: time="2025-01-29T13:06:05.728868364Z" level=info msg="StopPodSandbox for \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\"" Jan 29 13:06:05.730473 containerd[1464]: time="2025-01-29T13:06:05.729930248Z" level=info msg="Ensure that sandbox 8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58 in task-service has been cleanup successfully" Jan 29 13:06:05.790631 containerd[1464]: time="2025-01-29T13:06:05.790566057Z" level=error msg="StopPodSandbox for \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\" failed" error="failed to destroy network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.790924 containerd[1464]: time="2025-01-29T13:06:05.790821754Z" level=error msg="StopPodSandbox for \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\" failed" error="failed to destroy network for sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.791109 kubelet[2660]: E0129 13:06:05.791072 2660 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:05.791443 kubelet[2660]: E0129 13:06:05.791371 2660 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:05.791506 kubelet[2660]: E0129 13:06:05.791419 2660 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee"} Jan 29 13:06:05.791506 kubelet[2660]: E0129 13:06:05.791490 2660 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebf5e465-ca1d-4589-8d98-2c00876ac6ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 13:06:05.791610 kubelet[2660]: E0129 13:06:05.791516 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebf5e465-ca1d-4589-8d98-2c00876ac6ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f5d86475-77ss4" podUID="ebf5e465-ca1d-4589-8d98-2c00876ac6ac" Jan 29 13:06:05.791666 kubelet[2660]: E0129 13:06:05.791136 2660 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc"} Jan 29 13:06:05.791695 kubelet[2660]: E0129 13:06:05.791670 2660 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 13:06:05.791738 kubelet[2660]: E0129 13:06:05.791692 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6b7pj" podUID="2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a" Jan 29 13:06:05.795530 containerd[1464]: time="2025-01-29T13:06:05.795118834Z" level=error msg="StopPodSandbox for \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\" failed" error="failed to destroy network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.795666 kubelet[2660]: E0129 13:06:05.795466 2660 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:05.795666 kubelet[2660]: E0129 13:06:05.795503 2660 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9"} Jan 29 13:06:05.795666 kubelet[2660]: E0129 13:06:05.795530 2660 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33e601d8-e340-43d3-8175-0473e13a164d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 13:06:05.795666 kubelet[2660]: E0129 13:06:05.795554 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33e601d8-e340-43d3-8175-0473e13a164d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f5d86475-4cr5v" podUID="33e601d8-e340-43d3-8175-0473e13a164d" Jan 29 13:06:05.801430 containerd[1464]: time="2025-01-29T13:06:05.801322348Z" level=error msg="StopPodSandbox for \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\" failed" error="failed to destroy network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.801675 kubelet[2660]: E0129 13:06:05.801518 2660 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:05.801675 kubelet[2660]: E0129 13:06:05.801563 2660 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff"} Jan 29 13:06:05.801675 kubelet[2660]: E0129 13:06:05.801610 2660 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42652645-3ddd-4845-94f8-f2a42fdbd94a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 13:06:05.801675 kubelet[2660]: E0129 13:06:05.801636 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42652645-3ddd-4845-94f8-f2a42fdbd94a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4pjxk" podUID="42652645-3ddd-4845-94f8-f2a42fdbd94a" Jan 29 13:06:05.806923 containerd[1464]: time="2025-01-29T13:06:05.806771241Z" level=error msg="StopPodSandbox for \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\" failed" error="failed to destroy network for sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:05.807032 kubelet[2660]: E0129 13:06:05.806997 2660 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:05.807105 kubelet[2660]: E0129 13:06:05.807032 2660 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58"} Jan 29 13:06:05.807105 kubelet[2660]: E0129 13:06:05.807063 2660 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ce8b713-0eab-46e6-97eb-990957745903\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 13:06:05.807105 kubelet[2660]: E0129 13:06:05.807085 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"2ce8b713-0eab-46e6-97eb-990957745903\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cd8fd798f-wmf87" podUID="2ce8b713-0eab-46e6-97eb-990957745903" Jan 29 13:06:06.497391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff-shm.mount: Deactivated successfully. Jan 29 13:06:06.497689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9-shm.mount: Deactivated successfully. Jan 29 13:06:06.497914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee-shm.mount: Deactivated successfully. Jan 29 13:06:06.512228 systemd[1]: Created slice kubepods-besteffort-podc60872ff_6905_49ac_9a5c_64272dbc73e4.slice - libcontainer container kubepods-besteffort-podc60872ff_6905_49ac_9a5c_64272dbc73e4.slice. Jan 29 13:06:06.518590 containerd[1464]: time="2025-01-29T13:06:06.518238664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-crzf7,Uid:c60872ff-6905-49ac-9a5c-64272dbc73e4,Namespace:calico-system,Attempt:0,}" Jan 29 13:06:06.656307 containerd[1464]: time="2025-01-29T13:06:06.656017584Z" level=error msg="Failed to destroy network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:06.659101 containerd[1464]: time="2025-01-29T13:06:06.658990752Z" level=error msg="encountered an error cleaning up failed sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:06.659429 containerd[1464]: time="2025-01-29T13:06:06.659204150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-crzf7,Uid:c60872ff-6905-49ac-9a5c-64272dbc73e4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:06.659791 kubelet[2660]: E0129 13:06:06.659725 2660 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:06.660040 kubelet[2660]: E0129 13:06:06.659834 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-crzf7" Jan 29 13:06:06.660040 kubelet[2660]: E0129 13:06:06.659885 2660 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-crzf7" Jan 29 13:06:06.660040 kubelet[2660]: E0129 13:06:06.659979 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-crzf7_calico-system(c60872ff-6905-49ac-9a5c-64272dbc73e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-crzf7_calico-system(c60872ff-6905-49ac-9a5c-64272dbc73e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:06:06.663090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4-shm.mount: Deactivated successfully. 
Jan 29 13:06:06.729770 kubelet[2660]: I0129 13:06:06.729727 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:06.730719 containerd[1464]: time="2025-01-29T13:06:06.730668789Z" level=info msg="StopPodSandbox for \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\"" Jan 29 13:06:06.730947 containerd[1464]: time="2025-01-29T13:06:06.730838837Z" level=info msg="Ensure that sandbox 3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4 in task-service has been cleanup successfully" Jan 29 13:06:06.784024 containerd[1464]: time="2025-01-29T13:06:06.783844775Z" level=error msg="StopPodSandbox for \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\" failed" error="failed to destroy network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:06.784216 kubelet[2660]: E0129 13:06:06.784063 2660 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:06.784216 kubelet[2660]: E0129 13:06:06.784111 2660 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4"} Jan 29 13:06:06.784216 kubelet[2660]: E0129 13:06:06.784148 2660 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c60872ff-6905-49ac-9a5c-64272dbc73e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 13:06:06.784216 kubelet[2660]: E0129 13:06:06.784178 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c60872ff-6905-49ac-9a5c-64272dbc73e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-crzf7" podUID="c60872ff-6905-49ac-9a5c-64272dbc73e4" Jan 29 13:06:11.192667 kubelet[2660]: I0129 13:06:11.192573 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 13:06:14.229779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328034475.mount: Deactivated successfully. 
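The failures above all share one root cause: every Calico CNI ADD or DEL on this host stats /var/lib/calico/nodename, a file the calico/node container writes (through its /var/lib/calico hostPath mount) once it has started and registered the node. Until that container is up — and at this point in the log its image is still being pulled — every sandbox create and cleanup fails fast with the same message; kubelet surfaces it at each layer (remote_runtime, kuberuntime_sandbox, kuberuntime_manager, pod_workers), skips the pod sync, and retries with backoff, while systemd unmounts each dead sandbox's shm mount. A minimal sketch of that readiness gate, assuming only the fail-fast behaviour the log shows (illustrative, not Calico's actual source):

```go
// readiness_gate.go — a minimal sketch of the nodename check that produces
// the "stat /var/lib/calico/nodename" errors above. The calico/node
// container creates this file once it has registered the node; until then,
// every CNI ADD/DEL on the host fails immediately.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func main() {
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		// The same error string surfaces in every RunPodSandbox/StopPodSandbox
		// failure above, wrapped first by containerd and then by kubelet.
		fmt.Printf("plugin type=%q failed: %v: check that the calico/node "+
			"container is running and has mounted /var/lib/calico/\n", "calico", err)
		os.Exit(1)
	}
	fmt.Printf("node registered as %q; CNI ADD/DEL can proceed\n", string(name))
}
```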
Jan 29 13:06:14.687685 containerd[1464]: time="2025-01-29T13:06:14.687584009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:14.689521 containerd[1464]: time="2025-01-29T13:06:14.689434840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 13:06:14.690985 containerd[1464]: time="2025-01-29T13:06:14.690928634Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:14.693992 containerd[1464]: time="2025-01-29T13:06:14.693817326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:14.695010 containerd[1464]: time="2025-01-29T13:06:14.694474676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.998973579s" Jan 29 13:06:14.695010 containerd[1464]: time="2025-01-29T13:06:14.694508500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 13:06:14.731847 containerd[1464]: time="2025-01-29T13:06:14.731601003Z" level=info msg="CreateContainer within sandbox \"7e1ca7482aae2abacf706b45b422c9e7b8a20b1277980255334ad499c5ff659a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 13:06:14.770192 containerd[1464]: time="2025-01-29T13:06:14.770118082Z" level=info msg="CreateContainer within sandbox \"7e1ca7482aae2abacf706b45b422c9e7b8a20b1277980255334ad499c5ff659a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7bbb0135d11e3fd7e401ad5dc7d58c56e97110d60d560e7cfa3ebf7f79730081\"" Jan 29 13:06:14.771868 containerd[1464]: time="2025-01-29T13:06:14.771086283Z" level=info msg="StartContainer for \"7bbb0135d11e3fd7e401ad5dc7d58c56e97110d60d560e7cfa3ebf7f79730081\"" Jan 29 13:06:14.807569 systemd[1]: Started cri-containerd-7bbb0135d11e3fd7e401ad5dc7d58c56e97110d60d560e7cfa3ebf7f79730081.scope - libcontainer container 7bbb0135d11e3fd7e401ad5dc7d58c56e97110d60d560e7cfa3ebf7f79730081. Jan 29 13:06:14.853957 containerd[1464]: time="2025-01-29T13:06:14.853827062Z" level=info msg="StartContainer for \"7bbb0135d11e3fd7e401ad5dc7d58c56e97110d60d560e7cfa3ebf7f79730081\" returns successfully" Jan 29 13:06:14.925884 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 13:06:14.926000 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
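Worth noting in the pull record: containerd reports 142,741,872 bytes for ghcr.io/flatcar/calico/node:v3.29.1 fetched in 8.998973579s, roughly 15.9 MB/s (about 15.1 MiB/s). The wireguard kernel lines immediately after StartContainer are consistent with calico-node probing for WireGuard support as it starts (Calico can use WireGuard for node-to-node encryption), which loads the module even when encryption is not enabled.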
Jan 29 13:06:15.833766 kubelet[2660]: I0129 13:06:15.831897 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-npxgs" podStartSLOduration=2.051314276 podStartE2EDuration="25.831860452s" podCreationTimestamp="2025-01-29 13:05:50 +0000 UTC" firstStartedPulling="2025-01-29 13:05:50.915624913 +0000 UTC m=+21.632614480" lastFinishedPulling="2025-01-29 13:06:14.696171089 +0000 UTC m=+45.413160656" observedRunningTime="2025-01-29 13:06:15.827573363 +0000 UTC m=+46.544562980" watchObservedRunningTime="2025-01-29 13:06:15.831860452 +0000 UTC m=+46.548850069" Jan 29 13:06:16.490429 containerd[1464]: time="2025-01-29T13:06:16.488364377Z" level=info msg="StopPodSandbox for \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\"" Jan 29 13:06:16.517536 kernel: bpftool[3925]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.599 [INFO][3916] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.599 [INFO][3916] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" iface="eth0" netns="/var/run/netns/cni-f6e417cd-07ae-76ba-e148-8c20da1de14b" Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.599 [INFO][3916] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" iface="eth0" netns="/var/run/netns/cni-f6e417cd-07ae-76ba-e148-8c20da1de14b" Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.603 [INFO][3916] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" iface="eth0" netns="/var/run/netns/cni-f6e417cd-07ae-76ba-e148-8c20da1de14b" Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.603 [INFO][3916] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.603 [INFO][3916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.636 [INFO][3931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" HandleID="k8s-pod-network.5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.636 [INFO][3931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.636 [INFO][3931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.644 [WARNING][3931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" HandleID="k8s-pod-network.5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.644 [INFO][3931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" HandleID="k8s-pod-network.5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.645 [INFO][3931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:16.650778 containerd[1464]: 2025-01-29 13:06:16.649 [INFO][3916] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:16.652960 containerd[1464]: time="2025-01-29T13:06:16.652431536Z" level=info msg="TearDown network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\" successfully" Jan 29 13:06:16.652960 containerd[1464]: time="2025-01-29T13:06:16.652462645Z" level=info msg="StopPodSandbox for \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\" returns successfully" Jan 29 13:06:16.654706 systemd[1]: run-netns-cni\x2df6e417cd\x2d07ae\x2d76ba\x2de148\x2d8c20da1de14b.mount: Deactivated successfully. Jan 29 13:06:16.708340 containerd[1464]: time="2025-01-29T13:06:16.708276198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f5d86475-77ss4,Uid:ebf5e465-ca1d-4589-8d98-2c00876ac6ac,Namespace:calico-apiserver,Attempt:1,}" Jan 29 13:06:16.807712 systemd[1]: run-containerd-runc-k8s.io-7bbb0135d11e3fd7e401ad5dc7d58c56e97110d60d560e7cfa3ebf7f79730081-runc.j2MSPI.mount: Deactivated successfully. 
Jan 29 13:06:16.884423 systemd-networkd[1375]: vxlan.calico: Link UP Jan 29 13:06:16.886063 systemd-networkd[1375]: vxlan.calico: Gained carrier Jan 29 13:06:17.814950 systemd-networkd[1375]: cali1497bf0ddab: Link UP Jan 29 13:06:17.816757 systemd-networkd[1375]: cali1497bf0ddab: Gained carrier Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.697 [INFO][4026] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0 calico-apiserver-59f5d86475- calico-apiserver ebf5e465-ca1d-4589-8d98-2c00876ac6ac 790 0 2025-01-29 13:05:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f5d86475 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-e-f5d4e76a77.novalocal calico-apiserver-59f5d86475-77ss4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1497bf0ddab [] []}} ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-77ss4" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.697 [INFO][4026] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-77ss4" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.748 [INFO][4037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" HandleID="k8s-pod-network.b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.767 [INFO][4037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" HandleID="k8s-pod-network.b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-e-f5d4e76a77.novalocal", "pod":"calico-apiserver-59f5d86475-77ss4", "timestamp":"2025-01-29 13:06:17.748719441 +0000 UTC"}, Hostname:"ci-4081-3-0-e-f5d4e76a77.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.767 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.767 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.767 [INFO][4037] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-f5d4e76a77.novalocal' Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.770 [INFO][4037] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.774 [INFO][4037] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.779 [INFO][4037] ipam/ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.782 [INFO][4037] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.786 [INFO][4037] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.786 [INFO][4037] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.788 [INFO][4037] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896 Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.795 [INFO][4037] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.806 [INFO][4037] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.65/26] block=192.168.61.64/26 handle="k8s-pod-network.b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.806 [INFO][4037] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.65/26] handle="k8s-pod-network.b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.806 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 13:06:17.836546 containerd[1464]: 2025-01-29 13:06:17.806 [INFO][4037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.65/26] IPv6=[] ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" HandleID="k8s-pod-network.b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:17.838980 containerd[1464]: 2025-01-29 13:06:17.808 [INFO][4026] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-77ss4" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0", GenerateName:"calico-apiserver-59f5d86475-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebf5e465-ca1d-4589-8d98-2c00876ac6ac", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f5d86475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"", Pod:"calico-apiserver-59f5d86475-77ss4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1497bf0ddab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:17.838980 containerd[1464]: 2025-01-29 13:06:17.808 [INFO][4026] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.65/32] ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-77ss4" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:17.838980 containerd[1464]: 2025-01-29 13:06:17.808 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1497bf0ddab ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-77ss4" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:17.838980 containerd[1464]: 2025-01-29 13:06:17.813 [INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-77ss4" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:17.838980 
containerd[1464]: 2025-01-29 13:06:17.813 [INFO][4026] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-77ss4" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0", GenerateName:"calico-apiserver-59f5d86475-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebf5e465-ca1d-4589-8d98-2c00876ac6ac", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f5d86475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896", Pod:"calico-apiserver-59f5d86475-77ss4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1497bf0ddab", MAC:"66:88:81:01:7c:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:17.838980 containerd[1464]: 2025-01-29 13:06:17.834 [INFO][4026] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-77ss4" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:17.871092 containerd[1464]: time="2025-01-29T13:06:17.870998790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:17.871268 containerd[1464]: time="2025-01-29T13:06:17.871069211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:17.871268 containerd[1464]: time="2025-01-29T13:06:17.871092184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:17.871268 containerd[1464]: time="2025-01-29T13:06:17.871183766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:17.899541 systemd[1]: Started cri-containerd-b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896.scope - libcontainer container b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896. 
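This episode shows the first successful CNI ADD: the vxlan.calico link that came up at 13:06:16 is the overlay device Calico uses for inter-node pod traffic, and cali1497bf0ddab is the host side of the new pod's veth. The ipam/ipam.go lines trace Calico's block-affinity allocation in order: look up the host's existing affinities, try the affine block (192.168.61.64/26), load and confirm it, claim the first free address from it (192.168.61.65 here), write the block back to commit the claim, and record a new handle for the pod before writing the WorkloadEndpoint to the datastore. A compressed sketch of that claim order, with an in-memory block as a hypothetical stand-in for the datastore-backed allocation block:

```go
// affine_block_claim.go — a sketch of "try affine block, claim first free
// address" as traced by the ipam/ipam.go log lines above.
package main

import (
	"fmt"
	"net"
)

type block struct {
	cidr      net.IPNet
	allocated map[string]bool // IP string -> in use
}

// inc returns the next IP address after ip.
func inc(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// claimFirstFree walks the block and claims the first unallocated address.
func (b *block) claimFirstFree() (net.IP, bool) {
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = inc(ip) {
		if !b.allocated[ip.String()] {
			b.allocated[ip.String()] = true
			return ip, true
		}
	}
	return nil, false
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.61.64/26")
	affine := &block{cidr: *cidr, allocated: map[string]bool{
		"192.168.61.64": true, // base address held back, so .65 is the first claim
	}}
	if ip, ok := affine.claimFirstFree(); ok {
		fmt.Printf("claimed %s/26 from affine block %s\n", ip, cidr) // 192.168.61.65
	}
}
```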
Jan 29 13:06:17.940145 containerd[1464]: time="2025-01-29T13:06:17.940063129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f5d86475-77ss4,Uid:ebf5e465-ca1d-4589-8d98-2c00876ac6ac,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896\"" Jan 29 13:06:17.948324 containerd[1464]: time="2025-01-29T13:06:17.948267767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 13:06:18.368864 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Jan 29 13:06:18.489854 containerd[1464]: time="2025-01-29T13:06:18.489635938Z" level=info msg="StopPodSandbox for \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\"" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.588 [INFO][4110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.589 [INFO][4110] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" iface="eth0" netns="/var/run/netns/cni-98261e26-1e88-e940-49ee-58ea92f2030d" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.590 [INFO][4110] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" iface="eth0" netns="/var/run/netns/cni-98261e26-1e88-e940-49ee-58ea92f2030d" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.590 [INFO][4110] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" iface="eth0" netns="/var/run/netns/cni-98261e26-1e88-e940-49ee-58ea92f2030d" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.590 [INFO][4110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.590 [INFO][4110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.635 [INFO][4116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" HandleID="k8s-pod-network.b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.635 [INFO][4116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.635 [INFO][4116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.642 [WARNING][4116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" HandleID="k8s-pod-network.b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.642 [INFO][4116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" HandleID="k8s-pod-network.b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.645 [INFO][4116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:18.647904 containerd[1464]: 2025-01-29 13:06:18.646 [INFO][4110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:18.648532 containerd[1464]: time="2025-01-29T13:06:18.648101951Z" level=info msg="TearDown network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\" successfully" Jan 29 13:06:18.648532 containerd[1464]: time="2025-01-29T13:06:18.648127408Z" level=info msg="StopPodSandbox for \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\" returns successfully" Jan 29 13:06:18.651103 containerd[1464]: time="2025-01-29T13:06:18.650995585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4pjxk,Uid:42652645-3ddd-4845-94f8-f2a42fdbd94a,Namespace:kube-system,Attempt:1,}" Jan 29 13:06:18.652258 systemd[1]: run-netns-cni\x2d98261e26\x2d1e88\x2de940\x2d49ee\x2d58ea92f2030d.mount: Deactivated successfully. 
Jan 29 13:06:18.785132 systemd-networkd[1375]: cali13e33985f66: Link UP Jan 29 13:06:18.786230 systemd-networkd[1375]: cali13e33985f66: Gained carrier Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.709 [INFO][4124] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0 coredns-7db6d8ff4d- kube-system 42652645-3ddd-4845-94f8-f2a42fdbd94a 802 0 2025-01-29 13:05:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-e-f5d4e76a77.novalocal coredns-7db6d8ff4d-4pjxk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali13e33985f66 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4pjxk" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.709 [INFO][4124] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4pjxk" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.738 [INFO][4135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" HandleID="k8s-pod-network.9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.749 [INFO][4135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" HandleID="k8s-pod-network.9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002936d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-e-f5d4e76a77.novalocal", "pod":"coredns-7db6d8ff4d-4pjxk", "timestamp":"2025-01-29 13:06:18.738918353 +0000 UTC"}, Hostname:"ci-4081-3-0-e-f5d4e76a77.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.749 [INFO][4135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.749 [INFO][4135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.749 [INFO][4135] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-f5d4e76a77.novalocal' Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.751 [INFO][4135] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.756 [INFO][4135] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.761 [INFO][4135] ipam/ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.763 [INFO][4135] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.767 [INFO][4135] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.767 [INFO][4135] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.769 [INFO][4135] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.773 [INFO][4135] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.780 [INFO][4135] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.66/26] block=192.168.61.64/26 handle="k8s-pod-network.9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.780 [INFO][4135] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.66/26] handle="k8s-pod-network.9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.780 [INFO][4135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 13:06:18.807588 containerd[1464]: 2025-01-29 13:06:18.780 [INFO][4135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.66/26] IPv6=[] ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" HandleID="k8s-pod-network.9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.809274 containerd[1464]: 2025-01-29 13:06:18.782 [INFO][4124] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4pjxk" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42652645-3ddd-4845-94f8-f2a42fdbd94a", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-4pjxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13e33985f66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:18.809274 containerd[1464]: 2025-01-29 13:06:18.782 [INFO][4124] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.66/32] ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4pjxk" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.809274 containerd[1464]: 2025-01-29 13:06:18.782 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13e33985f66 ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4pjxk" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.809274 containerd[1464]: 2025-01-29 13:06:18.786 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-4pjxk" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.809274 containerd[1464]: 2025-01-29 13:06:18.788 [INFO][4124] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4pjxk" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42652645-3ddd-4845-94f8-f2a42fdbd94a", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b", Pod:"coredns-7db6d8ff4d-4pjxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13e33985f66", MAC:"7e:81:36:32:67:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:18.809274 containerd[1464]: 2025-01-29 13:06:18.803 [INFO][4124] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4pjxk" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:18.834701 containerd[1464]: time="2025-01-29T13:06:18.834585640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:18.834838 containerd[1464]: time="2025-01-29T13:06:18.834707748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:18.834838 containerd[1464]: time="2025-01-29T13:06:18.834732705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:18.834930 containerd[1464]: time="2025-01-29T13:06:18.834822974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:18.858563 systemd[1]: Started cri-containerd-9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b.scope - libcontainer container 9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b. Jan 29 13:06:18.904505 containerd[1464]: time="2025-01-29T13:06:18.904253607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4pjxk,Uid:42652645-3ddd-4845-94f8-f2a42fdbd94a,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b\"" Jan 29 13:06:18.930983 containerd[1464]: time="2025-01-29T13:06:18.930935604Z" level=info msg="CreateContainer within sandbox \"9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 13:06:18.960819 containerd[1464]: time="2025-01-29T13:06:18.960772335Z" level=info msg="CreateContainer within sandbox \"9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab0b2fd1d218ccc907ee196877648d7257f65faccd689993aa94012f4bc2c3b0\"" Jan 29 13:06:18.969897 containerd[1464]: time="2025-01-29T13:06:18.968650092Z" level=info msg="StartContainer for \"ab0b2fd1d218ccc907ee196877648d7257f65faccd689993aa94012f4bc2c3b0\"" Jan 29 13:06:19.032581 systemd[1]: Started cri-containerd-ab0b2fd1d218ccc907ee196877648d7257f65faccd689993aa94012f4bc2c3b0.scope - libcontainer container ab0b2fd1d218ccc907ee196877648d7257f65faccd689993aa94012f4bc2c3b0. Jan 29 13:06:19.070491 containerd[1464]: time="2025-01-29T13:06:19.070455251Z" level=info msg="StartContainer for \"ab0b2fd1d218ccc907ee196877648d7257f65faccd689993aa94012f4bc2c3b0\" returns successfully" Jan 29 13:06:19.510531 containerd[1464]: time="2025-01-29T13:06:19.510471140Z" level=info msg="StopPodSandbox for \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\"" Jan 29 13:06:19.512152 containerd[1464]: time="2025-01-29T13:06:19.511187740Z" level=info msg="StopPodSandbox for \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\"" Jan 29 13:06:19.512609 containerd[1464]: time="2025-01-29T13:06:19.512546182Z" level=info msg="StopPodSandbox for \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\"" Jan 29 13:06:19.585148 systemd-networkd[1375]: cali1497bf0ddab: Gained IPv6LL Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.654 [INFO][4280] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.655 [INFO][4280] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" iface="eth0" netns="/var/run/netns/cni-3818fb59-91f8-7852-6d3d-72b68bf4e442" Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.656 [INFO][4280] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" iface="eth0" netns="/var/run/netns/cni-3818fb59-91f8-7852-6d3d-72b68bf4e442" Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.659 [INFO][4280] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" iface="eth0" netns="/var/run/netns/cni-3818fb59-91f8-7852-6d3d-72b68bf4e442" Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.659 [INFO][4280] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.660 [INFO][4280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.710 [INFO][4298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" HandleID="k8s-pod-network.8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.710 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.710 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.724 [WARNING][4298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" HandleID="k8s-pod-network.8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.724 [INFO][4298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" HandleID="k8s-pod-network.8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.726 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:19.734567 containerd[1464]: 2025-01-29 13:06:19.729 [INFO][4280] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:19.735221 containerd[1464]: time="2025-01-29T13:06:19.735112323Z" level=info msg="TearDown network for sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\" successfully" Jan 29 13:06:19.738675 containerd[1464]: time="2025-01-29T13:06:19.735142239Z" level=info msg="StopPodSandbox for \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\" returns successfully" Jan 29 13:06:19.744148 containerd[1464]: time="2025-01-29T13:06:19.744025610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd8fd798f-wmf87,Uid:2ce8b713-0eab-46e6-97eb-990957745903,Namespace:calico-system,Attempt:1,}" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.647 [INFO][4265] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.647 [INFO][4265] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" iface="eth0" netns="/var/run/netns/cni-8046d522-eb8d-ef17-4ca3-8fcbbabefc51" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.648 [INFO][4265] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" iface="eth0" netns="/var/run/netns/cni-8046d522-eb8d-ef17-4ca3-8fcbbabefc51" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.649 [INFO][4265] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" iface="eth0" netns="/var/run/netns/cni-8046d522-eb8d-ef17-4ca3-8fcbbabefc51" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.649 [INFO][4265] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.649 [INFO][4265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.729 [INFO][4294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" HandleID="k8s-pod-network.c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.729 [INFO][4294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.729 [INFO][4294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.752 [WARNING][4294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" HandleID="k8s-pod-network.c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.754 [INFO][4294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" HandleID="k8s-pod-network.c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.757 [INFO][4294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:19.766092 containerd[1464]: 2025-01-29 13:06:19.760 [INFO][4265] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:19.769517 containerd[1464]: time="2025-01-29T13:06:19.769359778Z" level=info msg="TearDown network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\" successfully" Jan 29 13:06:19.769517 containerd[1464]: time="2025-01-29T13:06:19.769439047Z" level=info msg="StopPodSandbox for \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\" returns successfully" Jan 29 13:06:19.771076 containerd[1464]: time="2025-01-29T13:06:19.771008774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f5d86475-4cr5v,Uid:33e601d8-e340-43d3-8175-0473e13a164d,Namespace:calico-apiserver,Attempt:1,}" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.683 [INFO][4284] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.683 [INFO][4284] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" iface="eth0" netns="/var/run/netns/cni-a9a476e1-e38a-d491-be3c-3970b24f34e6" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.684 [INFO][4284] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" iface="eth0" netns="/var/run/netns/cni-a9a476e1-e38a-d491-be3c-3970b24f34e6" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.684 [INFO][4284] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" iface="eth0" netns="/var/run/netns/cni-a9a476e1-e38a-d491-be3c-3970b24f34e6" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.684 [INFO][4284] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.684 [INFO][4284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.763 [INFO][4302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" HandleID="k8s-pod-network.1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.763 [INFO][4302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.764 [INFO][4302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.783 [WARNING][4302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" HandleID="k8s-pod-network.1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.785 [INFO][4302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" HandleID="k8s-pod-network.1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.792 [INFO][4302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:19.804841 containerd[1464]: 2025-01-29 13:06:19.800 [INFO][4284] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:19.808019 containerd[1464]: time="2025-01-29T13:06:19.807453961Z" level=info msg="TearDown network for sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\" successfully" Jan 29 13:06:19.808019 containerd[1464]: time="2025-01-29T13:06:19.807592349Z" level=info msg="StopPodSandbox for \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\" returns successfully" Jan 29 13:06:19.810039 containerd[1464]: time="2025-01-29T13:06:19.809791464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6b7pj,Uid:2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a,Namespace:kube-system,Attempt:1,}" Jan 29 13:06:19.814863 kubelet[2660]: I0129 13:06:19.814765 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4pjxk" podStartSLOduration=35.81474398 podStartE2EDuration="35.81474398s" podCreationTimestamp="2025-01-29 13:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 13:06:19.811091987 +0000 UTC m=+50.528081564" watchObservedRunningTime="2025-01-29 13:06:19.81474398 +0000 UTC m=+50.531733547" Jan 29 13:06:19.887805 systemd[1]: run-containerd-runc-k8s.io-ab0b2fd1d218ccc907ee196877648d7257f65faccd689993aa94012f4bc2c3b0-runc.hpnXhV.mount: Deactivated successfully. Jan 29 13:06:19.888363 systemd[1]: run-netns-cni\x2d3818fb59\x2d91f8\x2d7852\x2d6d3d\x2d72b68bf4e442.mount: Deactivated successfully. Jan 29 13:06:19.888762 systemd[1]: run-netns-cni\x2d8046d522\x2deb8d\x2def17\x2d4ca3\x2d8fcbbabefc51.mount: Deactivated successfully. Jan 29 13:06:19.889099 systemd[1]: run-netns-cni\x2da9a476e1\x2de38a\x2dd491\x2dbe3c\x2d3970b24f34e6.mount: Deactivated successfully. 
Jan 29 13:06:20.071064 systemd-networkd[1375]: calia106cd2a277: Link UP Jan 29 13:06:20.072741 systemd-networkd[1375]: calia106cd2a277: Gained carrier Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:19.930 [INFO][4314] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0 calico-kube-controllers-6cd8fd798f- calico-system 2ce8b713-0eab-46e6-97eb-990957745903 814 0 2025-01-29 13:05:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cd8fd798f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-e-f5d4e76a77.novalocal calico-kube-controllers-6cd8fd798f-wmf87 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia106cd2a277 [] []}} ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Namespace="calico-system" Pod="calico-kube-controllers-6cd8fd798f-wmf87" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:19.930 [INFO][4314] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Namespace="calico-system" Pod="calico-kube-controllers-6cd8fd798f-wmf87" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:19.994 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" HandleID="k8s-pod-network.d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.008 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" HandleID="k8s-pod-network.d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf700), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-e-f5d4e76a77.novalocal", "pod":"calico-kube-controllers-6cd8fd798f-wmf87", "timestamp":"2025-01-29 13:06:19.994826175 +0000 UTC"}, Hostname:"ci-4081-3-0-e-f5d4e76a77.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.009 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.010 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
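[Editor's note] When dataplane_linux.go logs "Setting the host side veth name" and systemd-networkd then reports "Link UP" / "Gained carrier" for a cali* interface (as for calia106cd2a277 above), the underlying operation is a veth pair with one end moved into the pod's netns. A hedged sketch of the equivalent ip(8) commands driven from Go; the netns name is illustrative and assumed to already exist under /var/run/netns.

```go
// Create a veth pair, move the pod end into a named netns, bring both up.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("ip", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ip %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	host, peer, netns := "calia106cd2a277", "eth0", "cni-test" // illustrative names
	for _, args := range [][]string{
		{"link", "add", host, "type", "veth", "peer", "name", peer},
		{"link", "set", peer, "netns", netns},
		{"link", "set", host, "up"}, // systemd-networkd then logs "Link UP"
		{"-n", netns, "link", "set", peer, "up"},
	} {
		if err := run(args...); err != nil {
			fmt.Println(err)
			return
		}
	}
}
```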
Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.010 [INFO][4352] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-f5d4e76a77.novalocal' Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.012 [INFO][4352] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.020 [INFO][4352] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.028 [INFO][4352] ipam/ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.033 [INFO][4352] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.037 [INFO][4352] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.038 [INFO][4352] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.040 [INFO][4352] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.050 [INFO][4352] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.059 [INFO][4352] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.67/26] block=192.168.61.64/26 handle="k8s-pod-network.d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.059 [INFO][4352] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.67/26] handle="k8s-pod-network.d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.059 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
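[Editor's note] The "Populated endpoint" dumps in this log (one follows for calico-kube-controllers; the coredns one appears earlier) print a full projectcalico.org/v3 WorkloadEndpoint. A trimmed, self-contained Go mirror of just the fields visible in these entries, with values taken from the coredns endpoint above; the real type lives in Calico's libcalico-go. Note the dumps print ports in hex: 0x35 = 53 and 0x23c1 = 9153.

```go
// Trimmed mirror of the WorkloadEndpoint fields these log entries dump.
package main

import "fmt"

type EndpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

type WorkloadEndpointSpec struct {
	Orchestrator  string
	Node          string
	Pod           string
	Endpoint      string // interface inside the pod
	IPNetworks    []string
	Profiles      []string
	InterfaceName string // host-side veth
	MAC           string
	Ports         []EndpointPort
}

func main() {
	ep := WorkloadEndpointSpec{
		Orchestrator:  "k8s",
		Node:          "ci-4081-3-0-e-f5d4e76a77.novalocal",
		Pod:           "coredns-7db6d8ff4d-4pjxk",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.61.66/32"},
		Profiles:      []string{"kns.kube-system", "ksa.kube-system.coredns"},
		InterfaceName: "cali13e33985f66",
		MAC:           "7e:81:36:32:67:5d",
		Ports: []EndpointPort{
			{"dns", "UDP", 0x35},       // 53
			{"dns-tcp", "TCP", 0x35},   // 53
			{"metrics", "TCP", 0x23c1}, // 9153
		},
	}
	fmt.Printf("%+v\n", ep)
}
```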
Jan 29 13:06:20.108818 containerd[1464]: 2025-01-29 13:06:20.059 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.67/26] IPv6=[] ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" HandleID="k8s-pod-network.d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:20.110742 containerd[1464]: 2025-01-29 13:06:20.063 [INFO][4314] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Namespace="calico-system" Pod="calico-kube-controllers-6cd8fd798f-wmf87" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0", GenerateName:"calico-kube-controllers-6cd8fd798f-", Namespace:"calico-system", SelfLink:"", UID:"2ce8b713-0eab-46e6-97eb-990957745903", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cd8fd798f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"", Pod:"calico-kube-controllers-6cd8fd798f-wmf87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia106cd2a277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:20.110742 containerd[1464]: 2025-01-29 13:06:20.063 [INFO][4314] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.67/32] ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Namespace="calico-system" Pod="calico-kube-controllers-6cd8fd798f-wmf87" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:20.110742 containerd[1464]: 2025-01-29 13:06:20.063 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia106cd2a277 ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Namespace="calico-system" Pod="calico-kube-controllers-6cd8fd798f-wmf87" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:20.110742 containerd[1464]: 2025-01-29 13:06:20.074 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Namespace="calico-system" Pod="calico-kube-controllers-6cd8fd798f-wmf87" 
WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:20.110742 containerd[1464]: 2025-01-29 13:06:20.075 [INFO][4314] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Namespace="calico-system" Pod="calico-kube-controllers-6cd8fd798f-wmf87" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0", GenerateName:"calico-kube-controllers-6cd8fd798f-", Namespace:"calico-system", SelfLink:"", UID:"2ce8b713-0eab-46e6-97eb-990957745903", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cd8fd798f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb", Pod:"calico-kube-controllers-6cd8fd798f-wmf87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia106cd2a277", MAC:"42:cf:3c:b3:22:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:20.110742 containerd[1464]: 2025-01-29 13:06:20.105 [INFO][4314] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb" Namespace="calico-system" Pod="calico-kube-controllers-6cd8fd798f-wmf87" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:20.147728 systemd-networkd[1375]: cali6960f228e99: Link UP Jan 29 13:06:20.147957 systemd-networkd[1375]: cali6960f228e99: Gained carrier Jan 29 13:06:20.177971 containerd[1464]: time="2025-01-29T13:06:20.177864102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:20.178283 containerd[1464]: time="2025-01-29T13:06:20.177943751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:20.178283 containerd[1464]: time="2025-01-29T13:06:20.177958228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:20.178283 containerd[1464]: time="2025-01-29T13:06:20.178068264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:19.925 [INFO][4323] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0 calico-apiserver-59f5d86475- calico-apiserver 33e601d8-e340-43d3-8175-0473e13a164d 813 0 2025-01-29 13:05:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f5d86475 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-e-f5d4e76a77.novalocal calico-apiserver-59f5d86475-4cr5v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6960f228e99 [] []}} ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-4cr5v" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:19.926 [INFO][4323] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-4cr5v" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.025 [INFO][4357] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" HandleID="k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.042 [INFO][4357] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" HandleID="k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001025b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-e-f5d4e76a77.novalocal", "pod":"calico-apiserver-59f5d86475-4cr5v", "timestamp":"2025-01-29 13:06:20.025114536 +0000 UTC"}, Hostname:"ci-4081-3-0-e-f5d4e76a77.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.042 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.062 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
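[Editor's note] Each IPAM request above is logged as an ipam.AutoAssignArgs literal. A self-contained mirror of just the fields visible in these entries (the real type is in Calico's libcalico-go), showing what one CNI ADD asks for: one IPv4, no IPv6, keyed by a per-container handle; values here come from the calico-apiserver request just logged.

```go
// Mirror of the AutoAssignArgs fields printed in the IPAM log entries.
package main

import (
	"fmt"
	"net"
)

type AutoAssignArgs struct {
	Num4        int
	Num6        int
	HandleID    *string
	Attrs       map[string]string
	Hostname    string
	IPv4Pools   []net.IPNet // empty => use any enabled pool
	IPv6Pools   []net.IPNet
	IntendedUse string
}

func main() {
	handle := "k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3"
	args := AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "ci-4081-3-0-e-f5d4e76a77.novalocal",
			"pod":       "calico-apiserver-59f5d86475-4cr5v",
		},
		Hostname:    "ci-4081-3-0-e-f5d4e76a77.novalocal",
		IntendedUse: "Workload",
	}
	fmt.Printf("requesting %d IPv4 / %d IPv6 for handle %s\n",
		args.Num4, args.Num6, *args.HandleID)
}
```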
Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.062 [INFO][4357] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-f5d4e76a77.novalocal' Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.070 [INFO][4357] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.089 [INFO][4357] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.101 [INFO][4357] ipam/ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.103 [INFO][4357] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.110 [INFO][4357] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.110 [INFO][4357] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.113 [INFO][4357] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3 Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.120 [INFO][4357] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.132 [INFO][4357] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.68/26] block=192.168.61.64/26 handle="k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.132 [INFO][4357] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.68/26] handle="k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.132 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 13:06:20.186004 containerd[1464]: 2025-01-29 13:06:20.132 [INFO][4357] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.68/26] IPv6=[] ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" HandleID="k8s-pod-network.754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:20.187122 containerd[1464]: 2025-01-29 13:06:20.137 [INFO][4323] cni-plugin/k8s.go 386: Populated endpoint ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-4cr5v" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0", GenerateName:"calico-apiserver-59f5d86475-", Namespace:"calico-apiserver", SelfLink:"", UID:"33e601d8-e340-43d3-8175-0473e13a164d", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f5d86475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"", Pod:"calico-apiserver-59f5d86475-4cr5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6960f228e99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:20.187122 containerd[1464]: 2025-01-29 13:06:20.137 [INFO][4323] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.68/32] ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-4cr5v" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:20.187122 containerd[1464]: 2025-01-29 13:06:20.137 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6960f228e99 ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-4cr5v" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:20.187122 containerd[1464]: 2025-01-29 13:06:20.148 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-4cr5v" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:20.187122 
containerd[1464]: 2025-01-29 13:06:20.150 [INFO][4323] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-4cr5v" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0", GenerateName:"calico-apiserver-59f5d86475-", Namespace:"calico-apiserver", SelfLink:"", UID:"33e601d8-e340-43d3-8175-0473e13a164d", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f5d86475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3", Pod:"calico-apiserver-59f5d86475-4cr5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6960f228e99", MAC:"f2:6c:6e:81:a6:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:20.187122 containerd[1464]: 2025-01-29 13:06:20.178 [INFO][4323] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3" Namespace="calico-apiserver" Pod="calico-apiserver-59f5d86475-4cr5v" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:20.216897 systemd[1]: Started cri-containerd-d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb.scope - libcontainer container d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb. 
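[Editor's note] The containerd entries in this log nest containerd's own logfmt output (time=, level=, msg=) inside each journal line. A small sketch that pulls those fields back out; the regex is a heuristic tuned to these particular lines (it handles the escaped quotes inside msg), not a general logfmt parser.

```go
// Extract time/level/msg from a journal-embedded containerd logfmt line.
package main

import (
	"fmt"
	"regexp"
)

var re = regexp.MustCompile(`time="([^"]+)"\s+level=(\w+)\s+msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `containerd[1464]: time="2025-01-29T13:06:20.382145237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2`
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s level=%s msg=%s\n", m[1], m[2], m[3])
	}
}
```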
Jan 29 13:06:20.225618 systemd-networkd[1375]: cali13e33985f66: Gained IPv6LL Jan 29 13:06:20.250182 systemd-networkd[1375]: cali9c1417fa83c: Link UP Jan 29 13:06:20.251525 systemd-networkd[1375]: cali9c1417fa83c: Gained carrier Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:19.987 [INFO][4339] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0 coredns-7db6d8ff4d- kube-system 2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a 815 0 2025-01-29 13:05:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-e-f5d4e76a77.novalocal coredns-7db6d8ff4d-6b7pj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c1417fa83c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6b7pj" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:19.988 [INFO][4339] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6b7pj" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.064 [INFO][4366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" HandleID="k8s-pod-network.4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.096 [INFO][4366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" HandleID="k8s-pod-network.4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319c30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-e-f5d4e76a77.novalocal", "pod":"coredns-7db6d8ff4d-6b7pj", "timestamp":"2025-01-29 13:06:20.064891763 +0000 UTC"}, Hostname:"ci-4081-3-0-e-f5d4e76a77.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.096 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.134 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
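[Editor's note] "Gained IPv6LL" above means the cali13e33985f66 interface acquired its IPv6 link-local address, derived from the MAC via EUI-64: flip the universal/local bit of the first byte and splice ff:fe into the middle, under the fe80::/64 prefix. A sketch using the MAC logged for that interface (7e:81:36:32:67:5d).

```go
// Compute the EUI-64 IPv6 link-local address for a MAC.
package main

import (
	"fmt"
	"net"
)

func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
	ip[8] = mac[0] ^ 0x02     // flip the universal/local bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe // EUI-64 filler
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("7e:81:36:32:67:5d")
	fmt.Println(linkLocalFromMAC(mac)) // fe80::7c81:36ff:fe32:675d
}
```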
Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.134 [INFO][4366] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-f5d4e76a77.novalocal' Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.137 [INFO][4366] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.145 [INFO][4366] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.164 [INFO][4366] ipam/ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.166 [INFO][4366] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.175 [INFO][4366] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.177 [INFO][4366] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.194 [INFO][4366] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555 Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.213 [INFO][4366] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.240 [INFO][4366] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.69/26] block=192.168.61.64/26 handle="k8s-pod-network.4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.240 [INFO][4366] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.69/26] handle="k8s-pod-network.4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.240 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 13:06:20.293719 containerd[1464]: 2025-01-29 13:06:20.240 [INFO][4366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.69/26] IPv6=[] ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" HandleID="k8s-pod-network.4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:20.294409 containerd[1464]: 2025-01-29 13:06:20.243 [INFO][4339] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6b7pj" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-6b7pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c1417fa83c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:20.294409 containerd[1464]: 2025-01-29 13:06:20.244 [INFO][4339] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.69/32] ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6b7pj" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:20.294409 containerd[1464]: 2025-01-29 13:06:20.244 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c1417fa83c ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6b7pj" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:20.294409 containerd[1464]: 2025-01-29 13:06:20.252 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-6b7pj" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:20.294409 containerd[1464]: 2025-01-29 13:06:20.253 [INFO][4339] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6b7pj" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555", Pod:"coredns-7db6d8ff4d-6b7pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c1417fa83c", MAC:"be:f9:71:ee:5d:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:20.294409 containerd[1464]: 2025-01-29 13:06:20.288 [INFO][4339] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6b7pj" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:20.299543 containerd[1464]: time="2025-01-29T13:06:20.298210739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:20.299543 containerd[1464]: time="2025-01-29T13:06:20.298271213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:20.299543 containerd[1464]: time="2025-01-29T13:06:20.298290950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:20.299543 containerd[1464]: time="2025-01-29T13:06:20.298374926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:20.361867 systemd[1]: Started cri-containerd-754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3.scope - libcontainer container 754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3. Jan 29 13:06:20.382222 containerd[1464]: time="2025-01-29T13:06:20.382145237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:20.385466 containerd[1464]: time="2025-01-29T13:06:20.384706990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:20.385466 containerd[1464]: time="2025-01-29T13:06:20.384727949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:20.385466 containerd[1464]: time="2025-01-29T13:06:20.384818308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:20.421593 systemd[1]: Started cri-containerd-4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555.scope - libcontainer container 4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555. Jan 29 13:06:20.486083 containerd[1464]: time="2025-01-29T13:06:20.485723359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6b7pj,Uid:2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a,Namespace:kube-system,Attempt:1,} returns sandbox id \"4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555\"" Jan 29 13:06:20.496055 containerd[1464]: time="2025-01-29T13:06:20.495600179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f5d86475-4cr5v,Uid:33e601d8-e340-43d3-8175-0473e13a164d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3\"" Jan 29 13:06:20.500119 containerd[1464]: time="2025-01-29T13:06:20.500087334Z" level=info msg="CreateContainer within sandbox \"4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 13:06:20.514322 containerd[1464]: time="2025-01-29T13:06:20.513274549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd8fd798f-wmf87,Uid:2ce8b713-0eab-46e6-97eb-990957745903,Namespace:calico-system,Attempt:1,} returns sandbox id \"d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb\"" Jan 29 13:06:20.530896 containerd[1464]: time="2025-01-29T13:06:20.530779241Z" level=info msg="CreateContainer within sandbox \"4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b83140fcc37d85761ffe976308946e604998f18473838ad127cee1c97bdf6033\"" Jan 29 13:06:20.531324 containerd[1464]: time="2025-01-29T13:06:20.531303703Z" level=info msg="StartContainer for \"b83140fcc37d85761ffe976308946e604998f18473838ad127cee1c97bdf6033\"" Jan 29 13:06:20.562586 systemd[1]: Started cri-containerd-b83140fcc37d85761ffe976308946e604998f18473838ad127cee1c97bdf6033.scope - libcontainer container b83140fcc37d85761ffe976308946e604998f18473838ad127cee1c97bdf6033. 
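The sequence in these entries is the standard CRI flow the kubelet drives: RunPodSandbox returns a sandbox ID (each sandbox running under a cri-containerd-&lt;id&gt;.scope systemd unit), CreateContainer places a container inside that sandbox, and StartContainer launches it. A sketch of the same three calls against the published CRI gRPC API; the socket path and the mostly empty configs are assumptions here, and a real caller fills in full pod and container configs:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; the default containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{ /* pod metadata, DNS, ports... */ }
	// "RunPodSandbox ... returns sandbox id" in the log.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}
	// "CreateContainer within sandbox ... returns container id".
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{ /* image, command... */ },
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	// "StartContainer for ... returns successfully".
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", c.ContainerId, "in sandbox", sb.PodSandboxId)
}
```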
Jan 29 13:06:20.595605 containerd[1464]: time="2025-01-29T13:06:20.595298091Z" level=info msg="StartContainer for \"b83140fcc37d85761ffe976308946e604998f18473838ad127cee1c97bdf6033\" returns successfully" Jan 29 13:06:20.830608 kubelet[2660]: I0129 13:06:20.830358 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6b7pj" podStartSLOduration=36.830340081 podStartE2EDuration="36.830340081s" podCreationTimestamp="2025-01-29 13:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 13:06:20.806432047 +0000 UTC m=+51.523421675" watchObservedRunningTime="2025-01-29 13:06:20.830340081 +0000 UTC m=+51.547329658" Jan 29 13:06:21.444811 systemd[1]: run-containerd-runc-k8s.io-7bbb0135d11e3fd7e401ad5dc7d58c56e97110d60d560e7cfa3ebf7f79730081-runc.mHUKKc.mount: Deactivated successfully. Jan 29 13:06:21.568779 systemd-networkd[1375]: cali6960f228e99: Gained IPv6LL Jan 29 13:06:21.952715 systemd-networkd[1375]: cali9c1417fa83c: Gained IPv6LL Jan 29 13:06:22.016888 systemd-networkd[1375]: calia106cd2a277: Gained IPv6LL Jan 29 13:06:22.489674 containerd[1464]: time="2025-01-29T13:06:22.489242323Z" level=info msg="StopPodSandbox for \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\"" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.560 [INFO][4621] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.560 [INFO][4621] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" iface="eth0" netns="/var/run/netns/cni-dbafd466-fc11-25c6-36f5-bbcd46ce9a9d" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.560 [INFO][4621] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" iface="eth0" netns="/var/run/netns/cni-dbafd466-fc11-25c6-36f5-bbcd46ce9a9d" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.561 [INFO][4621] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" iface="eth0" netns="/var/run/netns/cni-dbafd466-fc11-25c6-36f5-bbcd46ce9a9d" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.561 [INFO][4621] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.561 [INFO][4621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.584 [INFO][4627] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" HandleID="k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.584 [INFO][4627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
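Both duration fields in the coredns latency entry are consistent: with zero-valued firstStartedPulling and lastFinishedPulling there is no image-pull window to subtract, so podStartSLOduration and podStartE2EDuration both equal watchObservedRunningTime minus podCreationTimestamp. A quick check with Go's time package, using the layout that matches the kubelet's timestamp format:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-01-29 13:05:44 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-01-29 13:06:20.830340081 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 36.830340081s, matching both fields
}
```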
Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.584 [INFO][4627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.596 [WARNING][4627] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" HandleID="k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.596 [INFO][4627] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" HandleID="k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.598 [INFO][4627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:22.602450 containerd[1464]: 2025-01-29 13:06:22.600 [INFO][4621] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:22.602450 containerd[1464]: time="2025-01-29T13:06:22.602168421Z" level=info msg="TearDown network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\" successfully" Jan 29 13:06:22.602450 containerd[1464]: time="2025-01-29T13:06:22.602198838Z" level=info msg="StopPodSandbox for \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\" returns successfully" Jan 29 13:06:22.603743 containerd[1464]: time="2025-01-29T13:06:22.603690690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-crzf7,Uid:c60872ff-6905-49ac-9a5c-64272dbc73e4,Namespace:calico-system,Attempt:1,}" Jan 29 13:06:22.606630 systemd[1]: run-netns-cni\x2ddbafd466\x2dfc11\x2d25c6\x2d36f5\x2dbbcd46ce9a9d.mount: Deactivated successfully. 
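The teardown above is deliberately tolerant: the plugin first releases the address by its handle ID, and when the allocation is already gone it logs "Asked to release address but it doesn't exist. Ignoring" and falls back to the workload ID, so a repeated CNI DEL converges instead of failing. A sketch of that fallback, with a plain map as a hypothetical stand-in for Calico's datastore:

```go
package main

import "fmt"

// release tries the handle ID first, then the workload ID, and treats a
// missing allocation as success so repeated teardowns stay idempotent.
func release(store map[string]string, handleID, workloadID string) {
	for _, key := range []string{handleID, workloadID} {
		if _, ok := store[key]; ok {
			delete(store, key)
			fmt.Printf("released allocation for %q\n", key)
			return
		}
		fmt.Printf("WARNING: asked to release %q but it doesn't exist; ignoring\n", key)
	}
	fmt.Println("teardown processing complete (nothing was allocated)")
}

func main() {
	// Empty store: the address for this sandbox was already released.
	release(map[string]string{},
		"k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4",
		"csi-node-driver-crzf7")
}
```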
Jan 29 13:06:22.878824 systemd-networkd[1375]: cali72cfb6e6a74: Link UP Jan 29 13:06:22.880536 systemd-networkd[1375]: cali72cfb6e6a74: Gained carrier Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.737 [INFO][4634] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0 csi-node-driver- calico-system c60872ff-6905-49ac-9a5c-64272dbc73e4 855 0 2025-01-29 13:05:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-e-f5d4e76a77.novalocal csi-node-driver-crzf7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali72cfb6e6a74 [] []}} ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Namespace="calico-system" Pod="csi-node-driver-crzf7" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.737 [INFO][4634] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Namespace="calico-system" Pod="csi-node-driver-crzf7" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.786 [INFO][4646] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" HandleID="k8s-pod-network.638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.797 [INFO][4646] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" HandleID="k8s-pod-network.638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-e-f5d4e76a77.novalocal", "pod":"csi-node-driver-crzf7", "timestamp":"2025-01-29 13:06:22.786491034 +0000 UTC"}, Hostname:"ci-4081-3-0-e-f5d4e76a77.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.797 [INFO][4646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.797 [INFO][4646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.797 [INFO][4646] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e-f5d4e76a77.novalocal' Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.799 [INFO][4646] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.804 [INFO][4646] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.809 [INFO][4646] ipam/ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.811 [INFO][4646] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.813 [INFO][4646] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.813 [INFO][4646] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.815 [INFO][4646] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.854 [INFO][4646] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.871 [INFO][4646] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.70/26] block=192.168.61.64/26 handle="k8s-pod-network.638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.871 [INFO][4646] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.70/26] handle="k8s-pod-network.638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" host="ci-4081-3-0-e-f5d4e76a77.novalocal" Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.871 [INFO][4646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
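This second allocation walks the same affine block and lands on the next free address, .70, immediately after coredns's .69. The block geometry is easy to confirm: a /26 holds 2^(32-26) = 64 addresses, so 192.168.61.64/26 spans .64 through .127:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.61.64/26")
	count := 1 << (32 - p.Bits()) // 64 addresses in a /26
	last := p.Addr().As4()
	last[3] += byte(count - 1) // .64 + 63 = .127
	fmt.Printf("%s: %d addresses, %s .. %s\n", p, count, p.Addr(), netip.AddrFrom4(last))
	fmt.Println("next after .69:", netip.MustParseAddr("192.168.61.69").Next()) // 192.168.61.70
}
```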
Jan 29 13:06:22.915217 containerd[1464]: 2025-01-29 13:06:22.871 [INFO][4646] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.70/26] IPv6=[] ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" HandleID="k8s-pod-network.638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.918205 containerd[1464]: 2025-01-29 13:06:22.874 [INFO][4634] cni-plugin/k8s.go 386: Populated endpoint ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Namespace="calico-system" Pod="csi-node-driver-crzf7" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c60872ff-6905-49ac-9a5c-64272dbc73e4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"", Pod:"csi-node-driver-crzf7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali72cfb6e6a74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:22.918205 containerd[1464]: 2025-01-29 13:06:22.875 [INFO][4634] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.70/32] ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Namespace="calico-system" Pod="csi-node-driver-crzf7" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.918205 containerd[1464]: 2025-01-29 13:06:22.875 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72cfb6e6a74 ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Namespace="calico-system" Pod="csi-node-driver-crzf7" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.918205 containerd[1464]: 2025-01-29 13:06:22.879 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Namespace="calico-system" Pod="csi-node-driver-crzf7" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.918205 containerd[1464]: 2025-01-29 13:06:22.881 [INFO][4634] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Namespace="calico-system" Pod="csi-node-driver-crzf7" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c60872ff-6905-49ac-9a5c-64272dbc73e4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc", Pod:"csi-node-driver-crzf7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali72cfb6e6a74", MAC:"42:00:26:96:cf:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:22.918205 containerd[1464]: 2025-01-29 13:06:22.911 [INFO][4634] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc" Namespace="calico-system" Pod="csi-node-driver-crzf7" WorkloadEndpoint="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:22.963801 containerd[1464]: time="2025-01-29T13:06:22.962330763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:22.963801 containerd[1464]: time="2025-01-29T13:06:22.962416433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:22.963801 containerd[1464]: time="2025-01-29T13:06:22.962445768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:22.963801 containerd[1464]: time="2025-01-29T13:06:22.962574149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:23.007595 systemd[1]: Started cri-containerd-638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc.scope - libcontainer container 638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc. 
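The host-side interface names in these entries (cali9c1417fa83c earlier, cali72cfb6e6a74 here) follow Calico's scheme of a fixed prefix plus a truncated hash of the workload identity, which keeps names unique per endpoint while fitting Linux's 15-character IFNAMSIZ limit. A sketch of that shape; the exact string Calico hashes is an assumption here, so the hex digits will differ from the log:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName builds a host-side interface name of the same shape Calico uses:
// prefix + truncated SHA-1 of the workload identity, 15 characters total.
// The identity string below is illustrative, not Calico's exact input.
func vethName(prefix, workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return prefix + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("cali", "calico-system.csi-node-driver-crzf7"))
}
```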
Jan 29 13:06:23.086782 containerd[1464]: time="2025-01-29T13:06:23.086698140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-crzf7,Uid:c60872ff-6905-49ac-9a5c-64272dbc73e4,Namespace:calico-system,Attempt:1,} returns sandbox id \"638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc\"" Jan 29 13:06:23.953162 containerd[1464]: time="2025-01-29T13:06:23.953094301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:23.954673 containerd[1464]: time="2025-01-29T13:06:23.954614365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 13:06:23.956590 containerd[1464]: time="2025-01-29T13:06:23.956545049Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:23.959204 containerd[1464]: time="2025-01-29T13:06:23.959142028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:23.960318 containerd[1464]: time="2025-01-29T13:06:23.959870752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 6.011565074s" Jan 29 13:06:23.960318 containerd[1464]: time="2025-01-29T13:06:23.959903563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 13:06:23.961842 containerd[1464]: time="2025-01-29T13:06:23.961813117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 13:06:23.963002 containerd[1464]: time="2025-01-29T13:06:23.962964080Z" level=info msg="CreateContainer within sandbox \"b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 13:06:23.988652 containerd[1464]: time="2025-01-29T13:06:23.988548609Z" level=info msg="CreateContainer within sandbox \"b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2c39be78bc6b8e71a3d09afbe058bdd8abf67e0e871318a5fd4d93db77b75a39\"" Jan 29 13:06:23.989220 containerd[1464]: time="2025-01-29T13:06:23.989137702Z" level=info msg="StartContainer for \"2c39be78bc6b8e71a3d09afbe058bdd8abf67e0e871318a5fd4d93db77b75a39\"" Jan 29 13:06:24.029544 systemd[1]: Started cri-containerd-2c39be78bc6b8e71a3d09afbe058bdd8abf67e0e871318a5fd4d93db77b75a39.scope - libcontainer container 2c39be78bc6b8e71a3d09afbe058bdd8abf67e0e871318a5fd4d93db77b75a39. 
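The pull metrics above allow a rough throughput estimate, treating the logged "bytes read" as the transferred volume over the reported wall time:

```go
package main

import "fmt"

// Back-of-envelope rate for the apiserver image pull in the log.
func main() {
	const bytesRead = 42001404  // "active requests=0, bytes read=42001404"
	const seconds = 6.011565074 // "... in 6.011565074s"
	mib := float64(bytesRead) / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.2fs = %.1f MiB/s\n", mib, seconds, mib/seconds)
}
```

By contrast, the second pull of the same tag just below completes in about 418 ms after reading only 77 bytes: the content is already local, so little beyond the registry's manifest check remains.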
Jan 29 13:06:24.081513 containerd[1464]: time="2025-01-29T13:06:24.081114076Z" level=info msg="StartContainer for \"2c39be78bc6b8e71a3d09afbe058bdd8abf67e0e871318a5fd4d93db77b75a39\" returns successfully" Jan 29 13:06:24.128579 systemd-networkd[1375]: cali72cfb6e6a74: Gained IPv6LL Jan 29 13:06:24.371466 containerd[1464]: time="2025-01-29T13:06:24.371373750Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:24.375653 containerd[1464]: time="2025-01-29T13:06:24.375587757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 13:06:24.380367 containerd[1464]: time="2025-01-29T13:06:24.380315535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 418.460019ms" Jan 29 13:06:24.380479 containerd[1464]: time="2025-01-29T13:06:24.380379174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 13:06:24.384763 containerd[1464]: time="2025-01-29T13:06:24.384719537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 13:06:24.388380 containerd[1464]: time="2025-01-29T13:06:24.388330585Z" level=info msg="CreateContainer within sandbox \"754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 13:06:24.421215 containerd[1464]: time="2025-01-29T13:06:24.421155207Z" level=info msg="CreateContainer within sandbox \"754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2526f3c752c575c8e7e1852f2fbd889eceac63f3f2186101be69388401db247\"" Jan 29 13:06:24.427072 containerd[1464]: time="2025-01-29T13:06:24.427025463Z" level=info msg="StartContainer for \"f2526f3c752c575c8e7e1852f2fbd889eceac63f3f2186101be69388401db247\"" Jan 29 13:06:24.462570 systemd[1]: Started cri-containerd-f2526f3c752c575c8e7e1852f2fbd889eceac63f3f2186101be69388401db247.scope - libcontainer container f2526f3c752c575c8e7e1852f2fbd889eceac63f3f2186101be69388401db247. 
Jan 29 13:06:24.517010 containerd[1464]: time="2025-01-29T13:06:24.516968974Z" level=info msg="StartContainer for \"f2526f3c752c575c8e7e1852f2fbd889eceac63f3f2186101be69388401db247\" returns successfully" Jan 29 13:06:24.867143 kubelet[2660]: I0129 13:06:24.867082 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59f5d86475-4cr5v" podStartSLOduration=30.981960998 podStartE2EDuration="34.867064361s" podCreationTimestamp="2025-01-29 13:05:50 +0000 UTC" firstStartedPulling="2025-01-29 13:06:20.497076922 +0000 UTC m=+51.214066489" lastFinishedPulling="2025-01-29 13:06:24.382180285 +0000 UTC m=+55.099169852" observedRunningTime="2025-01-29 13:06:24.866596185 +0000 UTC m=+55.583585762" watchObservedRunningTime="2025-01-29 13:06:24.867064361 +0000 UTC m=+55.584053938" Jan 29 13:06:25.861929 kubelet[2660]: I0129 13:06:25.861714 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 13:06:25.862549 kubelet[2660]: I0129 13:06:25.861750 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 13:06:27.550933 containerd[1464]: time="2025-01-29T13:06:27.549583620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:27.552823 containerd[1464]: time="2025-01-29T13:06:27.552736932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 13:06:27.553438 containerd[1464]: time="2025-01-29T13:06:27.553358375Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:27.556241 containerd[1464]: time="2025-01-29T13:06:27.556160671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:27.557146 containerd[1464]: time="2025-01-29T13:06:27.556849009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.172071293s" Jan 29 13:06:27.557146 containerd[1464]: time="2025-01-29T13:06:27.556891959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 13:06:27.558558 containerd[1464]: time="2025-01-29T13:06:27.558524465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 13:06:27.573508 containerd[1464]: time="2025-01-29T13:06:27.571814712Z" level=info msg="CreateContainer within sandbox \"d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 13:06:27.601333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62866818.mount: Deactivated successfully. 
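The kubelet's two duration fields for calico-apiserver-59f5d86475-4cr5v are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. Checking the arithmetic against the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(layout, s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created := mustParse(layout, "2025-01-29 13:05:50 +0000 UTC")
	watchObserved := mustParse(layout, "2025-01-29 13:06:24.867064361 +0000 UTC")
	firstPull := mustParse(layout, "2025-01-29 13:06:20.497076922 +0000 UTC")
	lastPull := mustParse(layout, "2025-01-29 13:06:24.382180285 +0000 UTC")

	e2e := watchObserved.Sub(created)    // 34.867064361s
	slo := e2e - lastPull.Sub(firstPull) // minus the 3.885103363s pull window
	fmt.Println(e2e, slo)                // 34.867064361s 30.981960998s
}
```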
Jan 29 13:06:27.617995 containerd[1464]: time="2025-01-29T13:06:27.617954617Z" level=info msg="CreateContainer within sandbox \"d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b410389a06a97a06e6307aedfc11b339cb9f6a7f123676258621830c124409f9\"" Jan 29 13:06:27.621421 containerd[1464]: time="2025-01-29T13:06:27.619549923Z" level=info msg="StartContainer for \"b410389a06a97a06e6307aedfc11b339cb9f6a7f123676258621830c124409f9\"" Jan 29 13:06:27.668560 systemd[1]: Started cri-containerd-b410389a06a97a06e6307aedfc11b339cb9f6a7f123676258621830c124409f9.scope - libcontainer container b410389a06a97a06e6307aedfc11b339cb9f6a7f123676258621830c124409f9. Jan 29 13:06:27.730228 containerd[1464]: time="2025-01-29T13:06:27.730116352Z" level=info msg="StartContainer for \"b410389a06a97a06e6307aedfc11b339cb9f6a7f123676258621830c124409f9\" returns successfully" Jan 29 13:06:27.888068 kubelet[2660]: I0129 13:06:27.887884 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59f5d86475-77ss4" podStartSLOduration=31.869128416 podStartE2EDuration="37.88786684s" podCreationTimestamp="2025-01-29 13:05:50 +0000 UTC" firstStartedPulling="2025-01-29 13:06:17.942169419 +0000 UTC m=+48.659158986" lastFinishedPulling="2025-01-29 13:06:23.960907843 +0000 UTC m=+54.677897410" observedRunningTime="2025-01-29 13:06:24.883492616 +0000 UTC m=+55.600482204" watchObservedRunningTime="2025-01-29 13:06:27.88786684 +0000 UTC m=+58.604856407" Jan 29 13:06:27.888674 kubelet[2660]: I0129 13:06:27.888249 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cd8fd798f-wmf87" podStartSLOduration=30.846077038 podStartE2EDuration="37.888241211s" podCreationTimestamp="2025-01-29 13:05:50 +0000 UTC" firstStartedPulling="2025-01-29 13:06:20.515710627 +0000 UTC m=+51.232700364" lastFinishedPulling="2025-01-29 13:06:27.55787496 +0000 UTC m=+58.274864537" observedRunningTime="2025-01-29 13:06:27.887960776 +0000 UTC m=+58.604950343" watchObservedRunningTime="2025-01-29 13:06:27.888241211 +0000 UTC m=+58.605230788" Jan 29 13:06:28.935904 systemd[1]: run-containerd-runc-k8s.io-b410389a06a97a06e6307aedfc11b339cb9f6a7f123676258621830c124409f9-runc.5neLVu.mount: Deactivated successfully. 
Jan 29 13:06:29.347628 containerd[1464]: time="2025-01-29T13:06:29.347585316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:29.349605 containerd[1464]: time="2025-01-29T13:06:29.348833903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 13:06:29.351112 containerd[1464]: time="2025-01-29T13:06:29.350566296Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:29.354484 containerd[1464]: time="2025-01-29T13:06:29.354444756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:29.355578 containerd[1464]: time="2025-01-29T13:06:29.355514930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.796954167s" Jan 29 13:06:29.355676 containerd[1464]: time="2025-01-29T13:06:29.355658068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 13:06:29.358617 containerd[1464]: time="2025-01-29T13:06:29.358579106Z" level=info msg="CreateContainer within sandbox \"638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 13:06:29.384532 containerd[1464]: time="2025-01-29T13:06:29.384492396Z" level=info msg="CreateContainer within sandbox \"638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"93a0723cb26117f1b7080edc0effdddfcfeb2bb6f113da8ef6e5db7c0f907b1a\"" Jan 29 13:06:29.385316 containerd[1464]: time="2025-01-29T13:06:29.385280892Z" level=info msg="StartContainer for \"93a0723cb26117f1b7080edc0effdddfcfeb2bb6f113da8ef6e5db7c0f907b1a\"" Jan 29 13:06:29.418571 systemd[1]: Started cri-containerd-93a0723cb26117f1b7080edc0effdddfcfeb2bb6f113da8ef6e5db7c0f907b1a.scope - libcontainer container 93a0723cb26117f1b7080edc0effdddfcfeb2bb6f113da8ef6e5db7c0f907b1a. Jan 29 13:06:29.452062 containerd[1464]: time="2025-01-29T13:06:29.452011951Z" level=info msg="StopPodSandbox for \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\"" Jan 29 13:06:29.486217 containerd[1464]: time="2025-01-29T13:06:29.485926459Z" level=info msg="StartContainer for \"93a0723cb26117f1b7080edc0effdddfcfeb2bb6f113da8ef6e5db7c0f907b1a\" returns successfully" Jan 29 13:06:29.490590 containerd[1464]: time="2025-01-29T13:06:29.490233502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.533 [WARNING][4908] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42652645-3ddd-4845-94f8-f2a42fdbd94a", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b", Pod:"coredns-7db6d8ff4d-4pjxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13e33985f66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.533 [INFO][4908] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.533 [INFO][4908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" iface="eth0" netns="" Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.533 [INFO][4908] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.533 [INFO][4908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.563 [INFO][4918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" HandleID="k8s-pod-network.b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.565 [INFO][4918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.565 [INFO][4918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
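The WARNING opening this block is a correctness guard, not a failure: the DEL is for the old sandbox b43036fe..., but the WorkloadEndpoint now records the replacement sandbox 9d0ef937... as its owner, so the plugin must not delete the live endpoint. In outline (field names are illustrative):

```go
package main

import "fmt"

type workloadEndpoint struct {
	name        string
	containerID string // the sandbox that currently owns this endpoint
}

// shouldDeleteWEP captures the guard logged above: a CNI DEL only tears
// down the WorkloadEndpoint if the deleting container still owns it, so a
// stale DEL from a replaced sandbox leaves the live endpoint untouched.
func shouldDeleteWEP(wep workloadEndpoint, cniContainerID string) bool {
	if wep.containerID != cniContainerID {
		fmt.Printf("WARNING: CNI_CONTAINERID %q does not match WEP ContainerID %q, don't delete WEP %s\n",
			cniContainerID, wep.containerID, wep.name)
		return false
	}
	return true
}

func main() {
	wep := workloadEndpoint{
		name:        "coredns--7db6d8ff4d--4pjxk-eth0",
		containerID: "9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b",
	}
	// DEL arrives for the replaced sandbox: refuse to delete.
	fmt.Println("delete?", shouldDeleteWEP(wep,
		"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff"))
}
```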
Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.583 [WARNING][4918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" HandleID="k8s-pod-network.b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.584 [INFO][4918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" HandleID="k8s-pod-network.b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.604 [INFO][4918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:29.607538 containerd[1464]: 2025-01-29 13:06:29.606 [INFO][4908] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:29.608296 containerd[1464]: time="2025-01-29T13:06:29.607548694Z" level=info msg="TearDown network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\" successfully" Jan 29 13:06:29.608296 containerd[1464]: time="2025-01-29T13:06:29.607578080Z" level=info msg="StopPodSandbox for \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\" returns successfully" Jan 29 13:06:29.610972 containerd[1464]: time="2025-01-29T13:06:29.610345440Z" level=info msg="RemovePodSandbox for \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\"" Jan 29 13:06:29.610972 containerd[1464]: time="2025-01-29T13:06:29.610384112Z" level=info msg="Forcibly stopping sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\"" Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.677 [WARNING][4936] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42652645-3ddd-4845-94f8-f2a42fdbd94a", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"9d0ef9373d3a13ea161613c266f9793d9d3963dca391e11a5c088684170f808b", Pod:"coredns-7db6d8ff4d-4pjxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13e33985f66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.678 [INFO][4936] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.678 [INFO][4936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" iface="eth0" netns="" Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.678 [INFO][4936] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.678 [INFO][4936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.715 [INFO][4942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" HandleID="k8s-pod-network.b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.715 [INFO][4942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.715 [INFO][4942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.725 [WARNING][4942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" HandleID="k8s-pod-network.b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.725 [INFO][4942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" HandleID="k8s-pod-network.b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--4pjxk-eth0" Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.727 [INFO][4942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:29.731038 containerd[1464]: 2025-01-29 13:06:29.729 [INFO][4936] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff" Jan 29 13:06:29.731543 containerd[1464]: time="2025-01-29T13:06:29.731059795Z" level=info msg="TearDown network for sandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\" successfully" Jan 29 13:06:29.746180 containerd[1464]: time="2025-01-29T13:06:29.745500138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 13:06:29.746180 containerd[1464]: time="2025-01-29T13:06:29.745608681Z" level=info msg="RemovePodSandbox \"b43036fe257fc772c16ec27a1646b283d327410d6d63c9cbebdbf9c54b79a7ff\" returns successfully" Jan 29 13:06:29.747817 containerd[1464]: time="2025-01-29T13:06:29.747783473Z" level=info msg="StopPodSandbox for \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\"" Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.801 [WARNING][4960] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0", GenerateName:"calico-kube-controllers-6cd8fd798f-", Namespace:"calico-system", SelfLink:"", UID:"2ce8b713-0eab-46e6-97eb-990957745903", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cd8fd798f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb", Pod:"calico-kube-controllers-6cd8fd798f-wmf87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia106cd2a277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.802 [INFO][4960] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.802 [INFO][4960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" iface="eth0" netns="" Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.802 [INFO][4960] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.802 [INFO][4960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.843 [INFO][4966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" HandleID="k8s-pod-network.8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.843 [INFO][4966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.843 [INFO][4966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.851 [WARNING][4966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" HandleID="k8s-pod-network.8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.851 [INFO][4966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" HandleID="k8s-pod-network.8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.852 [INFO][4966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:29.854947 containerd[1464]: 2025-01-29 13:06:29.853 [INFO][4960] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:29.855674 containerd[1464]: time="2025-01-29T13:06:29.854975163Z" level=info msg="TearDown network for sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\" successfully" Jan 29 13:06:29.855674 containerd[1464]: time="2025-01-29T13:06:29.855001142Z" level=info msg="StopPodSandbox for \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\" returns successfully" Jan 29 13:06:29.855674 containerd[1464]: time="2025-01-29T13:06:29.855460593Z" level=info msg="RemovePodSandbox for \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\"" Jan 29 13:06:29.855674 containerd[1464]: time="2025-01-29T13:06:29.855485670Z" level=info msg="Forcibly stopping sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\"" Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.899 [WARNING][4987] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0", GenerateName:"calico-kube-controllers-6cd8fd798f-", Namespace:"calico-system", SelfLink:"", UID:"2ce8b713-0eab-46e6-97eb-990957745903", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cd8fd798f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"d8490ca523cc1edc56302e0c9d6011d9e8564a7f3eff0f3402893c9b8a1ed3eb", Pod:"calico-kube-controllers-6cd8fd798f-wmf87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia106cd2a277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.899 [INFO][4987] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.899 [INFO][4987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" iface="eth0" netns="" Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.899 [INFO][4987] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.899 [INFO][4987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.920 [INFO][4993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" HandleID="k8s-pod-network.8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.920 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.921 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.930 [WARNING][4993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" HandleID="k8s-pod-network.8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.930 [INFO][4993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" HandleID="k8s-pod-network.8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--kube--controllers--6cd8fd798f--wmf87-eth0" Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.932 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:29.934765 containerd[1464]: 2025-01-29 13:06:29.933 [INFO][4987] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58" Jan 29 13:06:29.934765 containerd[1464]: time="2025-01-29T13:06:29.934681713Z" level=info msg="TearDown network for sandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\" successfully" Jan 29 13:06:29.939955 containerd[1464]: time="2025-01-29T13:06:29.939857643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 13:06:29.940064 containerd[1464]: time="2025-01-29T13:06:29.939991934Z" level=info msg="RemovePodSandbox \"8ab1b8f3df198ecce76cb42cd98da1d02d179b989cac78db6acdc0eb8f501c58\" returns successfully" Jan 29 13:06:29.940848 containerd[1464]: time="2025-01-29T13:06:29.940608188Z" level=info msg="StopPodSandbox for \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\"" Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:29.981 [WARNING][5012] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555", Pod:"coredns-7db6d8ff4d-6b7pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c1417fa83c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:29.982 [INFO][5012] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:29.982 [INFO][5012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" iface="eth0" netns="" Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:29.982 [INFO][5012] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:29.982 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:30.004 [INFO][5018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" HandleID="k8s-pod-network.1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:30.004 [INFO][5018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:30.004 [INFO][5018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:30.013 [WARNING][5018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" HandleID="k8s-pod-network.1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:30.013 [INFO][5018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" HandleID="k8s-pod-network.1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:30.015 [INFO][5018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:30.018513 containerd[1464]: 2025-01-29 13:06:30.016 [INFO][5012] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:30.019185 containerd[1464]: time="2025-01-29T13:06:30.018764878Z" level=info msg="TearDown network for sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\" successfully" Jan 29 13:06:30.019185 containerd[1464]: time="2025-01-29T13:06:30.018807428Z" level=info msg="StopPodSandbox for \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\" returns successfully" Jan 29 13:06:30.019712 containerd[1464]: time="2025-01-29T13:06:30.019682887Z" level=info msg="RemovePodSandbox for \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\"" Jan 29 13:06:30.019712 containerd[1464]: time="2025-01-29T13:06:30.019718043Z" level=info msg="Forcibly stopping sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\"" Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.062 [WARNING][5036] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2ca282d0-cdb1-4b7f-a6d5-0674baf19e5a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"4fe52dd0d27f684e70fc80bbcdc60721cc6b8a247a3e32e997100cc03295b555", Pod:"coredns-7db6d8ff4d-6b7pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c1417fa83c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.062 [INFO][5036] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.062 [INFO][5036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" iface="eth0" netns="" Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.062 [INFO][5036] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.062 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.087 [INFO][5042] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" HandleID="k8s-pod-network.1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.087 [INFO][5042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.087 [INFO][5042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.095 [WARNING][5042] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" HandleID="k8s-pod-network.1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.095 [INFO][5042] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" HandleID="k8s-pod-network.1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-coredns--7db6d8ff4d--6b7pj-eth0" Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.097 [INFO][5042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:30.100266 containerd[1464]: 2025-01-29 13:06:30.098 [INFO][5036] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc" Jan 29 13:06:30.100266 containerd[1464]: time="2025-01-29T13:06:30.099613522Z" level=info msg="TearDown network for sandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\" successfully" Jan 29 13:06:30.103532 containerd[1464]: time="2025-01-29T13:06:30.103471144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 13:06:30.103603 containerd[1464]: time="2025-01-29T13:06:30.103539633Z" level=info msg="RemovePodSandbox \"1c5077404ea442ad671f38a232fd87fddcde7cb3ce07dc1821e407f13f4a4cbc\" returns successfully" Jan 29 13:06:30.104334 containerd[1464]: time="2025-01-29T13:06:30.104057572Z" level=info msg="StopPodSandbox for \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\"" Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.146 [WARNING][5060] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0", GenerateName:"calico-apiserver-59f5d86475-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebf5e465-ca1d-4589-8d98-2c00876ac6ac", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f5d86475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896", Pod:"calico-apiserver-59f5d86475-77ss4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1497bf0ddab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.147 [INFO][5060] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.147 [INFO][5060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" iface="eth0" netns="" Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.147 [INFO][5060] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.147 [INFO][5060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.178 [INFO][5066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" HandleID="k8s-pod-network.5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.178 [INFO][5066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.178 [INFO][5066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.184 [WARNING][5066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" HandleID="k8s-pod-network.5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.185 [INFO][5066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" HandleID="k8s-pod-network.5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.186 [INFO][5066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:30.188610 containerd[1464]: 2025-01-29 13:06:30.187 [INFO][5060] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:30.188610 containerd[1464]: time="2025-01-29T13:06:30.188504903Z" level=info msg="TearDown network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\" successfully" Jan 29 13:06:30.188610 containerd[1464]: time="2025-01-29T13:06:30.188556300Z" level=info msg="StopPodSandbox for \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\" returns successfully" Jan 29 13:06:30.190483 containerd[1464]: time="2025-01-29T13:06:30.189140524Z" level=info msg="RemovePodSandbox for \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\"" Jan 29 13:06:30.190483 containerd[1464]: time="2025-01-29T13:06:30.189170440Z" level=info msg="Forcibly stopping sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\"" Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.234 [WARNING][5085] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0", GenerateName:"calico-apiserver-59f5d86475-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebf5e465-ca1d-4589-8d98-2c00876ac6ac", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f5d86475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"b01eab91d73f353b8e6e2b7dd67c5273e4d7b461453c88f4409b29262c1a4896", Pod:"calico-apiserver-59f5d86475-77ss4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1497bf0ddab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.234 [INFO][5085] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.234 [INFO][5085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" iface="eth0" netns="" Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.234 [INFO][5085] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.234 [INFO][5085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.256 [INFO][5091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" HandleID="k8s-pod-network.5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.256 [INFO][5091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.256 [INFO][5091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.263 [WARNING][5091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" HandleID="k8s-pod-network.5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.263 [INFO][5091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" HandleID="k8s-pod-network.5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--77ss4-eth0" Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.267 [INFO][5091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:30.269271 containerd[1464]: 2025-01-29 13:06:30.268 [INFO][5085] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee" Jan 29 13:06:30.269813 containerd[1464]: time="2025-01-29T13:06:30.269323392Z" level=info msg="TearDown network for sandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\" successfully" Jan 29 13:06:30.273791 containerd[1464]: time="2025-01-29T13:06:30.273736935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 13:06:30.273871 containerd[1464]: time="2025-01-29T13:06:30.273821423Z" level=info msg="RemovePodSandbox \"5da936b64e40a8f7a776202e6114813cbddc342df8defb071bf9d6f3a8aecfee\" returns successfully" Jan 29 13:06:30.274415 containerd[1464]: time="2025-01-29T13:06:30.274374949Z" level=info msg="StopPodSandbox for \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\"" Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.317 [WARNING][5109] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c60872ff-6905-49ac-9a5c-64272dbc73e4", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc", Pod:"csi-node-driver-crzf7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali72cfb6e6a74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.317 [INFO][5109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.317 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" iface="eth0" netns="" Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.317 [INFO][5109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.317 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.357 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" HandleID="k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.357 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.357 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.364 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" HandleID="k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.364 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" HandleID="k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.366 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:30.369028 containerd[1464]: 2025-01-29 13:06:30.367 [INFO][5109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:30.370058 containerd[1464]: time="2025-01-29T13:06:30.369065998Z" level=info msg="TearDown network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\" successfully" Jan 29 13:06:30.370058 containerd[1464]: time="2025-01-29T13:06:30.369091606Z" level=info msg="StopPodSandbox for \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\" returns successfully" Jan 29 13:06:30.370058 containerd[1464]: time="2025-01-29T13:06:30.369757201Z" level=info msg="RemovePodSandbox for \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\"" Jan 29 13:06:30.370058 containerd[1464]: time="2025-01-29T13:06:30.369802887Z" level=info msg="Forcibly stopping sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\"" Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.412 [WARNING][5134] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c60872ff-6905-49ac-9a5c-64272dbc73e4", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc", Pod:"csi-node-driver-crzf7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali72cfb6e6a74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.412 [INFO][5134] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.412 [INFO][5134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" iface="eth0" netns="" Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.412 [INFO][5134] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.412 [INFO][5134] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.436 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" HandleID="k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.436 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.436 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.446 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" HandleID="k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.446 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" HandleID="k8s-pod-network.3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-csi--node--driver--crzf7-eth0" Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.449 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:30.453015 containerd[1464]: 2025-01-29 13:06:30.451 [INFO][5134] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4" Jan 29 13:06:30.453671 containerd[1464]: time="2025-01-29T13:06:30.452999478Z" level=info msg="TearDown network for sandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\" successfully" Jan 29 13:06:30.457669 containerd[1464]: time="2025-01-29T13:06:30.457633633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 13:06:30.457763 containerd[1464]: time="2025-01-29T13:06:30.457695439Z" level=info msg="RemovePodSandbox \"3758b99a20b463568bc03972ae2c8fb9c914eb99becc9ad85895ec6ae665c2b4\" returns successfully" Jan 29 13:06:30.458587 containerd[1464]: time="2025-01-29T13:06:30.458563314Z" level=info msg="StopPodSandbox for \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\"" Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.497 [WARNING][5158] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0", GenerateName:"calico-apiserver-59f5d86475-", Namespace:"calico-apiserver", SelfLink:"", UID:"33e601d8-e340-43d3-8175-0473e13a164d", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f5d86475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3", Pod:"calico-apiserver-59f5d86475-4cr5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6960f228e99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.498 [INFO][5158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.498 [INFO][5158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" iface="eth0" netns="" Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.498 [INFO][5158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.498 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.520 [INFO][5164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" HandleID="k8s-pod-network.c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.520 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.520 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.531 [WARNING][5164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" HandleID="k8s-pod-network.c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.531 [INFO][5164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" HandleID="k8s-pod-network.c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.532 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:30.536134 containerd[1464]: 2025-01-29 13:06:30.534 [INFO][5158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:30.536893 containerd[1464]: time="2025-01-29T13:06:30.536157315Z" level=info msg="TearDown network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\" successfully" Jan 29 13:06:30.536893 containerd[1464]: time="2025-01-29T13:06:30.536181961Z" level=info msg="StopPodSandbox for \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\" returns successfully" Jan 29 13:06:30.536893 containerd[1464]: time="2025-01-29T13:06:30.536865250Z" level=info msg="RemovePodSandbox for \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\"" Jan 29 13:06:30.538473 containerd[1464]: time="2025-01-29T13:06:30.536900306Z" level=info msg="Forcibly stopping sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\"" Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.614 [WARNING][5182] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0", GenerateName:"calico-apiserver-59f5d86475-", Namespace:"calico-apiserver", SelfLink:"", UID:"33e601d8-e340-43d3-8175-0473e13a164d", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f5d86475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e-f5d4e76a77.novalocal", ContainerID:"754e21613c09299b41f212869c87b54625ed7122e3761587a86aa9f7de39afd3", Pod:"calico-apiserver-59f5d86475-4cr5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6960f228e99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.614 [INFO][5182] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.614 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" iface="eth0" netns="" Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.614 [INFO][5182] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.614 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.643 [INFO][5188] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" HandleID="k8s-pod-network.c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.643 [INFO][5188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.643 [INFO][5188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.650 [WARNING][5188] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" HandleID="k8s-pod-network.c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.650 [INFO][5188] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" HandleID="k8s-pod-network.c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Workload="ci--4081--3--0--e--f5d4e76a77.novalocal-k8s-calico--apiserver--59f5d86475--4cr5v-eth0" Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.652 [INFO][5188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:30.654787 containerd[1464]: 2025-01-29 13:06:30.653 [INFO][5182] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9" Jan 29 13:06:30.655220 containerd[1464]: time="2025-01-29T13:06:30.654855677Z" level=info msg="TearDown network for sandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\" successfully" Jan 29 13:06:30.658984 containerd[1464]: time="2025-01-29T13:06:30.658938701Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 13:06:30.659044 containerd[1464]: time="2025-01-29T13:06:30.659030373Z" level=info msg="RemovePodSandbox \"c14ed1571a3d97f3bde190574b2a890eaedaf862af2d490477bf163ccfeba3e9\" returns successfully" Jan 29 13:06:31.634827 containerd[1464]: time="2025-01-29T13:06:31.634766216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:31.636326 containerd[1464]: time="2025-01-29T13:06:31.636270563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 13:06:31.637515 containerd[1464]: time="2025-01-29T13:06:31.637479606Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:31.640357 containerd[1464]: time="2025-01-29T13:06:31.640286010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:31.641358 containerd[1464]: time="2025-01-29T13:06:31.641046604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.150771894s" Jan 29 13:06:31.641358 containerd[1464]: time="2025-01-29T13:06:31.641088543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 13:06:31.643771 
containerd[1464]: time="2025-01-29T13:06:31.643543388Z" level=info msg="CreateContainer within sandbox \"638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 13:06:31.661274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558606752.mount: Deactivated successfully. Jan 29 13:06:31.664370 containerd[1464]: time="2025-01-29T13:06:31.664338671Z" level=info msg="CreateContainer within sandbox \"638a61a5697ddddc31a821ec5e1975244ea701d88614dcfdbaa55b1bca803cfc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"97b3e935d4605a0e6311edac1d79d3dbba1c1e65a63f6efdca2a27ef521bb817\"" Jan 29 13:06:31.665269 containerd[1464]: time="2025-01-29T13:06:31.665159858Z" level=info msg="StartContainer for \"97b3e935d4605a0e6311edac1d79d3dbba1c1e65a63f6efdca2a27ef521bb817\"" Jan 29 13:06:31.706558 systemd[1]: Started cri-containerd-97b3e935d4605a0e6311edac1d79d3dbba1c1e65a63f6efdca2a27ef521bb817.scope - libcontainer container 97b3e935d4605a0e6311edac1d79d3dbba1c1e65a63f6efdca2a27ef521bb817. Jan 29 13:06:31.823191 containerd[1464]: time="2025-01-29T13:06:31.823076442Z" level=info msg="StartContainer for \"97b3e935d4605a0e6311edac1d79d3dbba1c1e65a63f6efdca2a27ef521bb817\" returns successfully" Jan 29 13:06:32.617648 kubelet[2660]: I0129 13:06:32.617607 2660 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 13:06:32.617648 kubelet[2660]: I0129 13:06:32.617641 2660 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 13:06:35.507766 systemd[1]: run-containerd-runc-k8s.io-b410389a06a97a06e6307aedfc11b339cb9f6a7f123676258621830c124409f9-runc.8iPutR.mount: Deactivated successfully. Jan 29 13:06:40.096047 kubelet[2660]: I0129 13:06:40.095428 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 13:06:40.190316 kubelet[2660]: I0129 13:06:40.190170 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-crzf7" podStartSLOduration=41.637619654 podStartE2EDuration="50.190138403s" podCreationTimestamp="2025-01-29 13:05:50 +0000 UTC" firstStartedPulling="2025-01-29 13:06:23.089601234 +0000 UTC m=+53.806590811" lastFinishedPulling="2025-01-29 13:06:31.642119983 +0000 UTC m=+62.359109560" observedRunningTime="2025-01-29 13:06:31.941014472 +0000 UTC m=+62.658004039" watchObservedRunningTime="2025-01-29 13:06:40.190138403 +0000 UTC m=+70.907128020" Jan 29 13:06:42.661479 kubelet[2660]: I0129 13:06:42.660494 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 13:07:14.745691 systemd[1]: Started sshd@9-172.24.4.245:22-172.24.4.1:33740.service - OpenSSH per-connection server daemon (172.24.4.1:33740). Jan 29 13:07:16.039966 sshd[5347]: Accepted publickey for core from 172.24.4.1 port 33740 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:07:16.044949 sshd[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:07:16.062931 systemd-logind[1443]: New session 12 of user core. Jan 29 13:07:16.070671 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 13:07:16.786709 sshd[5347]: pam_unix(sshd:session): session closed for user core Jan 29 13:07:16.790406 systemd[1]: sshd@9-172.24.4.245:22-172.24.4.1:33740.service: Deactivated successfully. Jan 29 13:07:16.790680 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Jan 29 13:07:16.792583 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 13:07:16.795354 systemd-logind[1443]: Removed session 12. Jan 29 13:07:21.807964 systemd[1]: Started sshd@10-172.24.4.245:22-172.24.4.1:33746.service - OpenSSH per-connection server daemon (172.24.4.1:33746). Jan 29 13:07:23.016807 sshd[5383]: Accepted publickey for core from 172.24.4.1 port 33746 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:07:23.020771 sshd[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:07:23.034145 systemd-logind[1443]: New session 13 of user core. Jan 29 13:07:23.039749 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 13:07:23.764809 sshd[5383]: pam_unix(sshd:session): session closed for user core Jan 29 13:07:23.772622 systemd[1]: sshd@10-172.24.4.245:22-172.24.4.1:33746.service: Deactivated successfully. Jan 29 13:07:23.777342 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 13:07:23.779582 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Jan 29 13:07:23.782011 systemd-logind[1443]: Removed session 13. Jan 29 13:07:28.787940 systemd[1]: Started sshd@11-172.24.4.245:22-172.24.4.1:43518.service - OpenSSH per-connection server daemon (172.24.4.1:43518). Jan 29 13:07:30.133934 sshd[5397]: Accepted publickey for core from 172.24.4.1 port 43518 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:07:30.137582 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:07:30.156950 systemd-logind[1443]: New session 14 of user core. Jan 29 13:07:30.164170 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 13:07:30.934547 sshd[5397]: pam_unix(sshd:session): session closed for user core Jan 29 13:07:30.947012 systemd[1]: sshd@11-172.24.4.245:22-172.24.4.1:43518.service: Deactivated successfully. Jan 29 13:07:30.950497 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 13:07:30.952958 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Jan 29 13:07:30.962061 systemd[1]: Started sshd@12-172.24.4.245:22-172.24.4.1:43532.service - OpenSSH per-connection server daemon (172.24.4.1:43532). Jan 29 13:07:30.966621 systemd-logind[1443]: Removed session 14. Jan 29 13:07:32.123107 sshd[5412]: Accepted publickey for core from 172.24.4.1 port 43532 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:07:32.126146 sshd[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:07:32.135492 systemd-logind[1443]: New session 15 of user core. Jan 29 13:07:32.142724 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 13:07:32.955270 sshd[5412]: pam_unix(sshd:session): session closed for user core Jan 29 13:07:32.964943 systemd[1]: sshd@12-172.24.4.245:22-172.24.4.1:43532.service: Deactivated successfully. Jan 29 13:07:32.969197 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 13:07:32.971225 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. 
Jan 29 13:07:32.982040 systemd[1]: Started sshd@13-172.24.4.245:22-172.24.4.1:43538.service - OpenSSH per-connection server daemon (172.24.4.1:43538). Jan 29 13:07:32.987592 systemd-logind[1443]: Removed session 15. Jan 29 13:07:34.225505 sshd[5423]: Accepted publickey for core from 172.24.4.1 port 43538 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:07:34.228479 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:07:34.241440 systemd-logind[1443]: New session 16 of user core. Jan 29 13:07:34.253691 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 13:07:34.987719 sshd[5423]: pam_unix(sshd:session): session closed for user core Jan 29 13:07:34.993906 systemd[1]: sshd@13-172.24.4.245:22-172.24.4.1:43538.service: Deactivated successfully. Jan 29 13:07:34.997926 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 13:07:35.002102 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Jan 29 13:07:35.004716 systemd-logind[1443]: Removed session 16. Jan 29 13:07:40.007972 systemd[1]: Started sshd@14-172.24.4.245:22-172.24.4.1:54846.service - OpenSSH per-connection server daemon (172.24.4.1:54846). Jan 29 13:07:41.175996 sshd[5464]: Accepted publickey for core from 172.24.4.1 port 54846 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:07:41.179854 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:07:41.191662 systemd-logind[1443]: New session 17 of user core. Jan 29 13:07:41.197700 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 13:07:41.796790 sshd[5464]: pam_unix(sshd:session): session closed for user core Jan 29 13:07:41.802804 systemd[1]: sshd@14-172.24.4.245:22-172.24.4.1:54846.service: Deactivated successfully. Jan 29 13:07:41.807265 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 13:07:41.810588 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Jan 29 13:07:41.813663 systemd-logind[1443]: Removed session 17. Jan 29 13:07:46.817989 systemd[1]: Started sshd@15-172.24.4.245:22-172.24.4.1:38968.service - OpenSSH per-connection server daemon (172.24.4.1:38968). Jan 29 13:07:48.223121 sshd[5481]: Accepted publickey for core from 172.24.4.1 port 38968 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 13:07:48.226359 sshd[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:07:48.247659 systemd-logind[1443]: New session 18 of user core. Jan 29 13:07:48.256640 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 13:07:48.977577 sshd[5481]: pam_unix(sshd:session): session closed for user core Jan 29 13:07:48.983238 systemd[1]: sshd@15-172.24.4.245:22-172.24.4.1:38968.service: Deactivated successfully. Jan 29 13:07:48.985114 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 13:07:48.987910 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Jan 29 13:07:48.989571 systemd-logind[1443]: Removed session 18. Jan 29 13:07:54.006144 systemd[1]: Started sshd@16-172.24.4.245:22-172.24.4.1:41328.service - OpenSSH per-connection server daemon (172.24.4.1:41328). 
Jan 29 13:07:55.057210 sshd[5551]: Accepted publickey for core from 172.24.4.1 port 41328 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:07:55.059971 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:07:55.070382 systemd-logind[1443]: New session 19 of user core.
Jan 29 13:07:55.077704 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 13:07:55.845884 sshd[5551]: pam_unix(sshd:session): session closed for user core
Jan 29 13:07:55.857981 systemd[1]: sshd@16-172.24.4.245:22-172.24.4.1:41328.service: Deactivated successfully.
Jan 29 13:07:55.861364 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 13:07:55.865211 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
Jan 29 13:07:55.870975 systemd[1]: Started sshd@17-172.24.4.245:22-172.24.4.1:41342.service - OpenSSH per-connection server daemon (172.24.4.1:41342).
Jan 29 13:07:55.873671 systemd-logind[1443]: Removed session 19.
Jan 29 13:07:57.095364 sshd[5564]: Accepted publickey for core from 172.24.4.1 port 41342 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:07:57.098541 sshd[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:07:57.108306 systemd-logind[1443]: New session 20 of user core.
Jan 29 13:07:57.114701 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 13:07:58.080634 sshd[5564]: pam_unix(sshd:session): session closed for user core
Jan 29 13:07:58.094126 systemd[1]: sshd@17-172.24.4.245:22-172.24.4.1:41342.service: Deactivated successfully.
Jan 29 13:07:58.100538 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 13:07:58.102897 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
Jan 29 13:07:58.113998 systemd[1]: Started sshd@18-172.24.4.245:22-172.24.4.1:41346.service - OpenSSH per-connection server daemon (172.24.4.1:41346).
Jan 29 13:07:58.117484 systemd-logind[1443]: Removed session 20.
Jan 29 13:07:59.449203 sshd[5575]: Accepted publickey for core from 172.24.4.1 port 41346 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:07:59.452727 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:07:59.467071 systemd-logind[1443]: New session 21 of user core.
Jan 29 13:07:59.471722 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 13:08:02.253572 sshd[5575]: pam_unix(sshd:session): session closed for user core
Jan 29 13:08:02.266448 systemd[1]: sshd@18-172.24.4.245:22-172.24.4.1:41346.service: Deactivated successfully.
Jan 29 13:08:02.269985 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 13:08:02.273757 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
Jan 29 13:08:02.285032 systemd[1]: Started sshd@19-172.24.4.245:22-172.24.4.1:41356.service - OpenSSH per-connection server daemon (172.24.4.1:41356).
Jan 29 13:08:02.289261 systemd-logind[1443]: Removed session 21.
Jan 29 13:08:03.418346 sshd[5593]: Accepted publickey for core from 172.24.4.1 port 41356 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:08:03.421353 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:08:03.432043 systemd-logind[1443]: New session 22 of user core.
Jan 29 13:08:03.440767 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 13:08:04.475952 sshd[5593]: pam_unix(sshd:session): session closed for user core
Jan 29 13:08:04.488193 systemd[1]: sshd@19-172.24.4.245:22-172.24.4.1:41356.service: Deactivated successfully.
Jan 29 13:08:04.492734 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 13:08:04.495545 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
Jan 29 13:08:04.505967 systemd[1]: Started sshd@20-172.24.4.245:22-172.24.4.1:56720.service - OpenSSH per-connection server daemon (172.24.4.1:56720).
Jan 29 13:08:04.509625 systemd-logind[1443]: Removed session 22.
Jan 29 13:08:05.717695 sshd[5604]: Accepted publickey for core from 172.24.4.1 port 56720 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:08:05.720552 sshd[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:08:05.729984 systemd-logind[1443]: New session 23 of user core.
Jan 29 13:08:05.737728 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 13:08:06.464772 sshd[5604]: pam_unix(sshd:session): session closed for user core
Jan 29 13:08:06.473218 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
Jan 29 13:08:06.473891 systemd[1]: sshd@20-172.24.4.245:22-172.24.4.1:56720.service: Deactivated successfully.
Jan 29 13:08:06.477987 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 13:08:06.481835 systemd-logind[1443]: Removed session 23.
Jan 29 13:08:11.481486 systemd[1]: Started sshd@21-172.24.4.245:22-172.24.4.1:56726.service - OpenSSH per-connection server daemon (172.24.4.1:56726).
Jan 29 13:08:12.798527 sshd[5640]: Accepted publickey for core from 172.24.4.1 port 56726 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:08:12.801275 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:08:12.813142 systemd-logind[1443]: New session 24 of user core.
Jan 29 13:08:12.819768 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 13:08:13.595807 sshd[5640]: pam_unix(sshd:session): session closed for user core
Jan 29 13:08:13.608493 systemd[1]: sshd@21-172.24.4.245:22-172.24.4.1:56726.service: Deactivated successfully.
Jan 29 13:08:13.620012 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 13:08:13.626850 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit.
Jan 29 13:08:13.633725 systemd-logind[1443]: Removed session 24.
Jan 29 13:08:18.116506 update_engine[1445]: I20250129 13:08:18.115828 1445 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 29 13:08:18.116506 update_engine[1445]: I20250129 13:08:18.115911 1445 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 29 13:08:18.116506 update_engine[1445]: I20250129 13:08:18.116271 1445 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 29 13:08:18.120042 update_engine[1445]: I20250129 13:08:18.119961 1445 omaha_request_params.cc:62] Current group set to lts
Jan 29 13:08:18.124011 update_engine[1445]: I20250129 13:08:18.123802 1445 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 29 13:08:18.124011 update_engine[1445]: I20250129 13:08:18.123844 1445 update_attempter.cc:643] Scheduling an action processor start.
Jan 29 13:08:18.124011 update_engine[1445]: I20250129 13:08:18.123878 1445 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 13:08:18.124011 update_engine[1445]: I20250129 13:08:18.123950 1445 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 29 13:08:18.124293 update_engine[1445]: I20250129 13:08:18.124104 1445 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 13:08:18.124293 update_engine[1445]: I20250129 13:08:18.124128 1445 omaha_request_action.cc:272] Request:
Jan 29 13:08:18.124293 update_engine[1445]:
Jan 29 13:08:18.124293 update_engine[1445]:
Jan 29 13:08:18.124293 update_engine[1445]:
Jan 29 13:08:18.124293 update_engine[1445]:
Jan 29 13:08:18.124293 update_engine[1445]:
Jan 29 13:08:18.124293 update_engine[1445]:
Jan 29 13:08:18.124293 update_engine[1445]:
Jan 29 13:08:18.124293 update_engine[1445]:
Jan 29 13:08:18.124293 update_engine[1445]: I20250129 13:08:18.124141 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 13:08:18.125834 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 29 13:08:18.130129 update_engine[1445]: I20250129 13:08:18.130048 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 13:08:18.130727 update_engine[1445]: I20250129 13:08:18.130640 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 13:08:18.143052 update_engine[1445]: E20250129 13:08:18.142786 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 13:08:18.143052 update_engine[1445]: I20250129 13:08:18.142956 1445 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 29 13:08:18.618742 systemd[1]: Started sshd@22-172.24.4.245:22-172.24.4.1:56904.service - OpenSSH per-connection server daemon (172.24.4.1:56904).
Jan 29 13:08:19.908620 sshd[5655]: Accepted publickey for core from 172.24.4.1 port 56904 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:08:19.911373 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:08:19.922307 systemd-logind[1443]: New session 25 of user core.
Jan 29 13:08:19.933713 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 13:08:20.696662 sshd[5655]: pam_unix(sshd:session): session closed for user core
Jan 29 13:08:20.703793 systemd[1]: sshd@22-172.24.4.245:22-172.24.4.1:56904.service: Deactivated successfully.
Jan 29 13:08:20.710676 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 13:08:20.712948 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit.
Jan 29 13:08:20.715317 systemd-logind[1443]: Removed session 25.
Jan 29 13:08:25.720044 systemd[1]: Started sshd@23-172.24.4.245:22-172.24.4.1:53790.service - OpenSSH per-connection server daemon (172.24.4.1:53790).
Jan 29 13:08:27.025660 sshd[5689]: Accepted publickey for core from 172.24.4.1 port 53790 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:08:27.028582 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:08:27.038378 systemd-logind[1443]: New session 26 of user core.
Jan 29 13:08:27.044909 systemd[1]: Started session-26.scope - Session 26 of User core.
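
"Posting an Omaha request to disabled" and the subsequent "Could not resolve host: disabled" are not a DNS outage: the update server URL here is the literal string disabled. On Flatcar this is the usual way automatic updates are switched off, typically by setting SERVER=disabled in /etc/flatcar/update.conf, the same file whose GROUP setting matches the "Current group set to lts" entry earlier; the fetcher still arms its 1-second timeout source and retries (retry 1 above, retries 2 and 3 below, roughly ten seconds apart) before declaring the transfer failed. A minimal sketch, assuming a plain KEY=value file at that path, that surfaces both settings:

    # Read an update.conf-style KEY=value file and report the update group and
    # whether the Omaha server is the "disabled" sentinel seen in the log above.
    from pathlib import Path

    def read_update_conf(path: str = "/etc/flatcar/update.conf") -> dict[str, str]:
        conf: dict[str, str] = {}
        for line in Path(path).read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            conf[key.strip()] = value.strip()
        return conf

    conf = read_update_conf()
    print("group :", conf.get("GROUP", "stable"))   # the log above shows "lts"
    print("server:", conf.get("SERVER", "<default public Omaha server>"))
    if conf.get("SERVER") == "disabled":
        print("updates intentionally disabled; the resolve failure is expected")
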
Jan 29 13:08:27.814268 sshd[5689]: pam_unix(sshd:session): session closed for user core
Jan 29 13:08:27.818361 systemd[1]: sshd@23-172.24.4.245:22-172.24.4.1:53790.service: Deactivated successfully.
Jan 29 13:08:27.821390 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 13:08:27.823912 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit.
Jan 29 13:08:27.825313 systemd-logind[1443]: Removed session 26.
Jan 29 13:08:28.111089 update_engine[1445]: I20250129 13:08:28.110929 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 13:08:28.111837 update_engine[1445]: I20250129 13:08:28.111124 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 13:08:28.111837 update_engine[1445]: I20250129 13:08:28.111303 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 13:08:28.121757 update_engine[1445]: E20250129 13:08:28.121559 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 13:08:28.122126 update_engine[1445]: I20250129 13:08:28.121903 1445 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 29 13:08:32.839991 systemd[1]: Started sshd@24-172.24.4.245:22-172.24.4.1:53804.service - OpenSSH per-connection server daemon (172.24.4.1:53804).
Jan 29 13:08:34.156478 sshd[5704]: Accepted publickey for core from 172.24.4.1 port 53804 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:08:34.159586 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:08:34.169718 systemd-logind[1443]: New session 27 of user core.
Jan 29 13:08:34.176705 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 13:08:34.757951 sshd[5704]: pam_unix(sshd:session): session closed for user core
Jan 29 13:08:34.763813 systemd[1]: sshd@24-172.24.4.245:22-172.24.4.1:53804.service: Deactivated successfully.
Jan 29 13:08:34.768169 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 13:08:34.772494 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit.
Jan 29 13:08:34.774754 systemd-logind[1443]: Removed session 27.
Jan 29 13:08:38.113861 update_engine[1445]: I20250129 13:08:38.113634 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 13:08:38.114576 update_engine[1445]: I20250129 13:08:38.114060 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 13:08:38.114576 update_engine[1445]: I20250129 13:08:38.114512 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 13:08:38.125552 update_engine[1445]: E20250129 13:08:38.125449 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 13:08:38.125687 update_engine[1445]: I20250129 13:08:38.125566 1445 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 29 13:08:39.778953 systemd[1]: Started sshd@25-172.24.4.245:22-172.24.4.1:53436.service - OpenSSH per-connection server daemon (172.24.4.1:53436).
Jan 29 13:08:41.083137 sshd[5737]: Accepted publickey for core from 172.24.4.1 port 53436 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM
Jan 29 13:08:41.086011 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:08:41.096900 systemd-logind[1443]: New session 28 of user core.
Jan 29 13:08:41.103900 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 13:08:41.832509 sshd[5737]: pam_unix(sshd:session): session closed for user core
Jan 29 13:08:41.839382 systemd[1]: sshd@25-172.24.4.245:22-172.24.4.1:53436.service: Deactivated successfully.
Jan 29 13:08:41.843371 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 13:08:41.846777 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit.
Jan 29 13:08:41.849652 systemd-logind[1443]: Removed session 28.
Jan 29 13:08:48.116361 update_engine[1445]: I20250129 13:08:48.116194 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 13:08:48.116944 update_engine[1445]: I20250129 13:08:48.116688 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 13:08:48.117177 update_engine[1445]: I20250129 13:08:48.117107 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 13:08:48.127601 update_engine[1445]: E20250129 13:08:48.127511 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 13:08:48.127786 update_engine[1445]: I20250129 13:08:48.127616 1445 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 13:08:48.127786 update_engine[1445]: I20250129 13:08:48.127637 1445 omaha_request_action.cc:617] Omaha request response:
Jan 29 13:08:48.127786 update_engine[1445]: E20250129 13:08:48.127771 1445 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 29 13:08:48.127941 update_engine[1445]: I20250129 13:08:48.127810 1445 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 29 13:08:48.127941 update_engine[1445]: I20250129 13:08:48.127822 1445 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 13:08:48.127941 update_engine[1445]: I20250129 13:08:48.127833 1445 update_attempter.cc:306] Processing Done.
Jan 29 13:08:48.127941 update_engine[1445]: E20250129 13:08:48.127856 1445 update_attempter.cc:619] Update failed.
Jan 29 13:08:48.127941 update_engine[1445]: I20250129 13:08:48.127868 1445 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 29 13:08:48.127941 update_engine[1445]: I20250129 13:08:48.127879 1445 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 29 13:08:48.127941 update_engine[1445]: I20250129 13:08:48.127890 1445 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 29 13:08:48.128271 update_engine[1445]: I20250129 13:08:48.128026 1445 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 13:08:48.128271 update_engine[1445]: I20250129 13:08:48.128072 1445 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 13:08:48.128271 update_engine[1445]: I20250129 13:08:48.128089 1445 omaha_request_action.cc:272] Request:
Jan 29 13:08:48.128271 update_engine[1445]:
Jan 29 13:08:48.128271 update_engine[1445]:
Jan 29 13:08:48.128271 update_engine[1445]:
Jan 29 13:08:48.128271 update_engine[1445]:
Jan 29 13:08:48.128271 update_engine[1445]:
Jan 29 13:08:48.128271 update_engine[1445]:
Jan 29 13:08:48.128271 update_engine[1445]: I20250129 13:08:48.128103 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 13:08:48.128873 update_engine[1445]: I20250129 13:08:48.128365 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 13:08:48.128873 update_engine[1445]: I20250129 13:08:48.128756 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 13:08:48.129352 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 29 13:08:48.139313 update_engine[1445]: E20250129 13:08:48.139219 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 13:08:48.139485 update_engine[1445]: I20250129 13:08:48.139321 1445 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 13:08:48.139485 update_engine[1445]: I20250129 13:08:48.139340 1445 omaha_request_action.cc:617] Omaha request response:
Jan 29 13:08:48.139485 update_engine[1445]: I20250129 13:08:48.139355 1445 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 13:08:48.139485 update_engine[1445]: I20250129 13:08:48.139366 1445 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 13:08:48.139485 update_engine[1445]: I20250129 13:08:48.139376 1445 update_attempter.cc:306] Processing Done.
Jan 29 13:08:48.139485 update_engine[1445]: I20250129 13:08:48.139389 1445 update_attempter.cc:310] Error event sent.
Jan 29 13:08:48.139485 update_engine[1445]: I20250129 13:08:48.139461 1445 update_check_scheduler.cc:74] Next update check in 46m51s
Jan 29 13:08:48.140076 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
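
After the third retry fails, the attempter declares the check failed (error 2000, mapped to kActionCodeOmahaErrorInHTTPResponse, payload error code 37), posts a second Omaha request carrying the error event (the second Request: block above; its XML body is blank in this capture), and returns to idle. "Next update check in 46m51s" appears consistent with update_engine's roughly 45-minute periodic check interval plus a randomized fuzz, though the exact scheduling is version-dependent. As a minimal, illustrative sketch, the following summarizes such a check from a journal dump in this format:

    # Summarize an update_engine check from journal text: how many fetch retries
    # occurred, whether the check failed, and when the next check is scheduled.
    # Usage: python3 check_summary.py < journal.txt
    import re
    import sys

    text = sys.stdin.read()
    retries = re.findall(r"No HTTP response, retry (\d+)", text)
    failed = "Update failed." in text
    next_check = re.search(r"Next update check in (\S+)", text)

    print("fetch retries :", max(map(int, retries), default=0))
    print("check failed  :", failed)
    print("next check in :", next_check.group(1) if next_check else "unknown")

On the excerpt above this would report 3 retries, a failed check, and a next check in 46m51s.
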