Dec 13 02:25:05.958243 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 02:25:05.958270 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:25:05.958282 kernel: BIOS-provided physical RAM map:
Dec 13 02:25:05.958291 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:25:05.958298 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:25:05.958305 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:25:05.958314 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 02:25:05.958322 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 02:25:05.958330 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 02:25:05.958340 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:25:05.958348 kernel: NX (Execute Disable) protection: active
Dec 13 02:25:05.958355 kernel: APIC: Static calls initialized
Dec 13 02:25:05.958363 kernel: SMBIOS 2.8 present.
Dec 13 02:25:05.958371 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 02:25:05.958380 kernel: Hypervisor detected: KVM
Dec 13 02:25:05.958391 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:25:05.958399 kernel: kvm-clock: using sched offset of 8357713046 cycles
Dec 13 02:25:05.958408 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:25:05.958417 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 02:25:05.958425 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:25:05.958434 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:25:05.958442 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 02:25:05.958451 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 02:25:05.958459 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:25:05.958470 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:25:05.958478 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 02:25:05.958487 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:25:05.958495 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:25:05.960535 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:25:05.960547 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 02:25:05.960556 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:25:05.960564 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:25:05.960588 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 02:25:05.960611 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 02:25:05.960620 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 02:25:05.960628 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 02:25:05.960636 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 02:25:05.960643 kernel: No NUMA configuration found
Dec 13 02:25:05.960651 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 02:25:05.960660 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 02:25:05.960672 kernel: Zone ranges:
Dec 13 02:25:05.960683 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:25:05.960692 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 02:25:05.960701 kernel: Normal empty
Dec 13 02:25:05.960709 kernel: Movable zone start for each node
Dec 13 02:25:05.960719 kernel: Early memory node ranges
Dec 13 02:25:05.960727 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:25:05.960738 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 02:25:05.960747 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 02:25:05.960756 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:25:05.960765 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:25:05.960774 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 02:25:05.960783 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 02:25:05.960791 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:25:05.960800 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:25:05.960809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 02:25:05.960820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:25:05.960829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:25:05.960838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:25:05.960847 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:25:05.960856 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:25:05.960865 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:25:05.960874 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 02:25:05.960883 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 02:25:05.960891 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:25:05.960900 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:25:05.960912 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:25:05.960921 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 02:25:05.960930 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 02:25:05.960939 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:25:05.960948 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 02:25:05.960958 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:25:05.960967 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:25:05.960979 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:25:05.960988 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:25:05.960997 kernel: Fallback order for Node 0: 0
Dec 13 02:25:05.961006 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 02:25:05.961014 kernel: Policy zone: DMA32
Dec 13 02:25:05.961023 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:25:05.961032 kernel: Memory: 1971212K/2096620K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 02:25:05.961041 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:25:05.961050 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 02:25:05.961060 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 02:25:05.961069 kernel: Dynamic Preempt: voluntary
Dec 13 02:25:05.961077 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 02:25:05.961087 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:25:05.961096 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:25:05.961105 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 02:25:05.961113 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:25:05.961122 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:25:05.961131 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:25:05.961140 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:25:05.961151 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:25:05.961160 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 02:25:05.961168 kernel: Console: colour VGA+ 80x25
Dec 13 02:25:05.961177 kernel: printk: console [tty0] enabled
Dec 13 02:25:05.961186 kernel: printk: console [ttyS0] enabled
Dec 13 02:25:05.961194 kernel: ACPI: Core revision 20230628
Dec 13 02:25:05.961203 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:25:05.961212 kernel: x2apic enabled
Dec 13 02:25:05.961221 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 02:25:05.961231 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 02:25:05.961240 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 02:25:05.961249 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 02:25:05.961258 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 02:25:05.961267 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 02:25:05.961276 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:25:05.961284 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:25:05.961293 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:25:05.961302 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:25:05.961313 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:25:05.961321 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 02:25:05.961330 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:25:05.961339 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:25:05.961348 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 02:25:05.961357 kernel: landlock: Up and running.
Dec 13 02:25:05.961365 kernel: SELinux: Initializing.
Dec 13 02:25:05.961374 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:25:05.961392 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:25:05.961401 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 02:25:05.961411 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:25:05.961422 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:25:05.961431 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:25:05.961441 kernel: Performance Events: AMD PMU driver.
Dec 13 02:25:05.961450 kernel: ... version: 0
Dec 13 02:25:05.961459 kernel: ... bit width: 48
Dec 13 02:25:05.961470 kernel: ... generic registers: 4
Dec 13 02:25:05.961479 kernel: ... value mask: 0000ffffffffffff
Dec 13 02:25:05.961488 kernel: ... max period: 00007fffffffffff
Dec 13 02:25:05.961498 kernel: ... fixed-purpose events: 0
Dec 13 02:25:05.961525 kernel: ... event mask: 000000000000000f
Dec 13 02:25:05.961534 kernel: signal: max sigframe size: 1440
Dec 13 02:25:05.961543 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:25:05.961553 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 02:25:05.961563 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:25:05.961574 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 02:25:05.961583 kernel: .... node #0, CPUs: #1
Dec 13 02:25:05.961592 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:25:05.961602 kernel: smpboot: Max logical packages: 2
Dec 13 02:25:05.961611 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 02:25:05.961620 kernel: devtmpfs: initialized
Dec 13 02:25:05.961629 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:25:05.961639 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:25:05.961648 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:25:05.961657 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:25:05.961669 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:25:05.961678 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:25:05.961688 kernel: audit: type=2000 audit(1734056705.195:1): state=initialized audit_enabled=0 res=1
Dec 13 02:25:05.961697 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:25:05.961707 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:25:05.961716 kernel: cpuidle: using governor menu
Dec 13 02:25:05.961725 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:25:05.961734 kernel: dca service started, version 1.12.1
Dec 13 02:25:05.961744 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:25:05.961755 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:25:05.961765 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:25:05.961774 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 02:25:05.961783 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:25:05.961792 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:25:05.961802 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:25:05.961811 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:25:05.961820 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:25:05.961829 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 02:25:05.961841 kernel: ACPI: Interpreter enabled
Dec 13 02:25:05.961850 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 02:25:05.961859 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:25:05.961869 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:25:05.961878 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 02:25:05.961888 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 02:25:05.961897 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:25:05.962062 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:25:05.962171 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 02:25:05.962270 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 02:25:05.962284 kernel: acpiphp: Slot [3] registered
Dec 13 02:25:05.962294 kernel: acpiphp: Slot [4] registered
Dec 13 02:25:05.962303 kernel: acpiphp: Slot [5] registered
Dec 13 02:25:05.962313 kernel: acpiphp: Slot [6] registered
Dec 13 02:25:05.962322 kernel: acpiphp: Slot [7] registered
Dec 13 02:25:05.962331 kernel: acpiphp: Slot [8] registered
Dec 13 02:25:05.962343 kernel: acpiphp: Slot [9] registered
Dec 13 02:25:05.962352 kernel: acpiphp: Slot [10] registered
Dec 13 02:25:05.962361 kernel: acpiphp: Slot [11] registered
Dec 13 02:25:05.962370 kernel: acpiphp: Slot [12] registered
Dec 13 02:25:05.962379 kernel: acpiphp: Slot [13] registered
Dec 13 02:25:05.962389 kernel: acpiphp: Slot [14] registered
Dec 13 02:25:05.962398 kernel: acpiphp: Slot [15] registered
Dec 13 02:25:05.962406 kernel: acpiphp: Slot [16] registered
Dec 13 02:25:05.962415 kernel: acpiphp: Slot [17] registered
Dec 13 02:25:05.962427 kernel: acpiphp: Slot [18] registered
Dec 13 02:25:05.962436 kernel: acpiphp: Slot [19] registered
Dec 13 02:25:05.962445 kernel: acpiphp: Slot [20] registered
Dec 13 02:25:05.962454 kernel: acpiphp: Slot [21] registered
Dec 13 02:25:05.962463 kernel: acpiphp: Slot [22] registered
Dec 13 02:25:05.962471 kernel: acpiphp: Slot [23] registered
Dec 13 02:25:05.962480 kernel: acpiphp: Slot [24] registered
Dec 13 02:25:05.962489 kernel: acpiphp: Slot [25] registered
Dec 13 02:25:05.962498 kernel: acpiphp: Slot [26] registered
Dec 13 02:25:05.962528 kernel: acpiphp: Slot [27] registered
Dec 13 02:25:05.962539 kernel: acpiphp: Slot [28] registered
Dec 13 02:25:05.962548 kernel: acpiphp: Slot [29] registered
Dec 13 02:25:05.962556 kernel: acpiphp: Slot [30] registered
Dec 13 02:25:05.962565 kernel: acpiphp: Slot [31] registered
Dec 13 02:25:05.962573 kernel: PCI host bridge to bus 0000:00
Dec 13 02:25:05.962676 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:25:05.962760 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:25:05.962842 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:25:05.962932 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:25:05.963018 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 02:25:05.963105 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:25:05.963223 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:25:05.963332 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:25:05.963440 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 02:25:05.963569 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 02:25:05.963671 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 02:25:05.963769 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 02:25:05.963866 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 02:25:05.963961 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 02:25:05.964069 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 02:25:05.964167 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 02:25:05.964269 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 02:25:05.964378 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 02:25:05.964476 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 02:25:05.964665 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 02:25:05.964768 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 02:25:05.964866 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 02:25:05.964963 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:25:05.965081 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 02:25:05.965179 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 02:25:05.965278 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 02:25:05.965384 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 02:25:05.965481 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 02:25:05.965612 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 02:25:05.965711 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 02:25:05.965809 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 02:25:05.965907 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 02:25:05.966014 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 02:25:05.966112 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 02:25:05.966212 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 02:25:05.966318 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 02:25:05.966422 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 02:25:05.968326 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 02:25:05.968347 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:25:05.968357 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:25:05.968367 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:25:05.968376 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:25:05.968386 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:25:05.968395 kernel: iommu: Default domain type: Translated
Dec 13 02:25:05.968405 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:25:05.968418 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:25:05.968428 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:25:05.968437 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:25:05.968447 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 02:25:05.968584 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 02:25:05.968693 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 02:25:05.968791 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:25:05.968806 kernel: vgaarb: loaded
Dec 13 02:25:05.968820 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:25:05.968829 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:25:05.968838 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:25:05.968847 kernel: pnp: PnP ACPI init
Dec 13 02:25:05.968949 kernel: pnp 00:03: [dma 2]
Dec 13 02:25:05.968964 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:25:05.968974 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:25:05.968984 kernel: NET: Registered PF_INET protocol family
Dec 13 02:25:05.968993 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:25:05.969006 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:25:05.969016 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:25:05.969025 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:25:05.969034 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 02:25:05.969044 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:25:05.969053 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:25:05.969062 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:25:05.969072 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:25:05.969081 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:25:05.969176 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:25:05.969265 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:25:05.969354 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:25:05.969442 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:25:05.969600 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 02:25:05.969721 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 02:25:05.969817 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:25:05.969835 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:25:05.969844 kernel: Initialise system trusted keyrings
Dec 13 02:25:05.969853 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:25:05.969862 kernel: Key type asymmetric registered
Dec 13 02:25:05.969871 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:25:05.969880 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 02:25:05.969888 kernel: io scheduler mq-deadline registered
Dec 13 02:25:05.969897 kernel: io scheduler kyber registered
Dec 13 02:25:05.969905 kernel: io scheduler bfq registered
Dec 13 02:25:05.969917 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:25:05.969926 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 02:25:05.969935 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 02:25:05.969944 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 02:25:05.969953 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 02:25:05.969961 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:25:05.969970 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:25:05.969979 kernel: random: crng init done
Dec 13 02:25:05.969987 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:25:05.969998 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:25:05.970007 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:25:05.970100 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 02:25:05.970114 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:25:05.970197 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 02:25:05.970280 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T02:25:05 UTC (1734056705)
Dec 13 02:25:05.970364 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 02:25:05.970377 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 02:25:05.970390 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:25:05.970399 kernel: Segment Routing with IPv6
Dec 13 02:25:05.970407 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:25:05.970416 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:25:05.970424 kernel: Key type dns_resolver registered
Dec 13 02:25:05.970433 kernel: IPI shorthand broadcast: enabled
Dec 13 02:25:05.970441 kernel: sched_clock: Marking stable (976007830, 124072145)->(1103175114, -3095139)
Dec 13 02:25:05.970450 kernel: registered taskstats version 1
Dec 13 02:25:05.970458 kernel: Loading compiled-in X.509 certificates
Dec 13 02:25:05.970469 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 02:25:05.970478 kernel: Key type .fscrypt registered
Dec 13 02:25:05.970486 kernel: Key type fscrypt-provisioning registered
Dec 13 02:25:05.970495 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:25:05.970616 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:25:05.970626 kernel: ima: No architecture policies found
Dec 13 02:25:05.970634 kernel: clk: Disabling unused clocks
Dec 13 02:25:05.970643 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 02:25:05.970651 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 02:25:05.970664 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 02:25:05.970673 kernel: Run /init as init process
Dec 13 02:25:05.970682 kernel: with arguments:
Dec 13 02:25:05.970690 kernel: /init
Dec 13 02:25:05.970699 kernel: with environment:
Dec 13 02:25:05.970707 kernel: HOME=/
Dec 13 02:25:05.970715 kernel: TERM=linux
Dec 13 02:25:05.970724 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:25:05.970735 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 02:25:05.970749 systemd[1]: Detected virtualization kvm.
Dec 13 02:25:05.970759 systemd[1]: Detected architecture x86-64.
Dec 13 02:25:05.970768 systemd[1]: Running in initrd.
Dec 13 02:25:05.970777 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:25:05.970786 systemd[1]: Hostname set to .
Dec 13 02:25:05.970796 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:25:05.970805 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:25:05.970817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:25:05.970826 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:25:05.970836 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 02:25:05.970846 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 02:25:05.970855 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 02:25:05.970864 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 02:25:05.970875 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 02:25:05.970887 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 02:25:05.970897 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:25:05.970906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:25:05.970916 systemd[1]: Reached target paths.target - Path Units.
Dec 13 02:25:05.970936 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 02:25:05.970948 systemd[1]: Reached target swap.target - Swaps.
Dec 13 02:25:05.970960 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 02:25:05.970969 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 02:25:05.970979 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 02:25:05.970988 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 02:25:05.971000 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 02:25:05.971010 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:25:05.971020 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:25:05.971031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:25:05.971043 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 02:25:05.971053 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 02:25:05.971063 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 02:25:05.971074 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 02:25:05.971084 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:25:05.971094 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 02:25:05.971105 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 02:25:05.971138 systemd-journald[185]: Collecting audit messages is disabled.
Dec 13 02:25:05.971166 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:25:05.971178 systemd-journald[185]: Journal started
Dec 13 02:25:05.971202 systemd-journald[185]: Runtime Journal (/run/log/journal/17bb280735104834b15b765837cb60c9) is 4.9M, max 39.3M, 34.4M free.
Dec 13 02:25:05.980554 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 02:25:05.988631 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 02:25:05.990157 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:25:05.993950 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:25:05.996629 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:25:06.013811 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 02:25:06.056357 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:25:06.056402 kernel: Bridge firewalling registered
Dec 13 02:25:06.036149 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 02:25:06.064713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 02:25:06.065642 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:25:06.071047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:25:06.071991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:25:06.078757 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:25:06.084747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:25:06.089825 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 02:25:06.094559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:25:06.104987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:25:06.108780 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 02:25:06.114588 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:25:06.122801 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:25:06.125642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 02:25:06.135928 dracut-cmdline[215]: dracut-dracut-053
Dec 13 02:25:06.139377 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:25:06.168262 systemd-resolved[219]: Positive Trust Anchors:
Dec 13 02:25:06.168301 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:25:06.168346 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 02:25:06.175533 systemd-resolved[219]: Defaulting to hostname 'linux'.
Dec 13 02:25:06.178071 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 02:25:06.178729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:25:06.240574 kernel: SCSI subsystem initialized
Dec 13 02:25:06.251576 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:25:06.264555 kernel: iscsi: registered transport (tcp)
Dec 13 02:25:06.288667 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:25:06.288737 kernel: QLogic iSCSI HBA Driver
Dec 13 02:25:06.355677 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 02:25:06.366835 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 02:25:06.431091 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:25:06.431832 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:25:06.433859 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 02:25:06.484564 kernel: raid6: sse2x4 gen() 12717 MB/s
Dec 13 02:25:06.501570 kernel: raid6: sse2x2 gen() 14569 MB/s
Dec 13 02:25:06.518630 kernel: raid6: sse2x1 gen() 9843 MB/s
Dec 13 02:25:06.518717 kernel: raid6: using algorithm sse2x2 gen() 14569 MB/s
Dec 13 02:25:06.536722 kernel: raid6: .... xor() 9359 MB/s, rmw enabled
Dec 13 02:25:06.536871 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 02:25:06.559739 kernel: xor: measuring software checksum speed
Dec 13 02:25:06.559843 kernel: prefetch64-sse : 18434 MB/sec
Dec 13 02:25:06.560697 kernel: generic_sse : 16834 MB/sec
Dec 13 02:25:06.562172 kernel: xor: using function: prefetch64-sse (18434 MB/sec)
Dec 13 02:25:06.742557 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 02:25:06.755605 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 02:25:06.761697 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:25:06.810178 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Dec 13 02:25:06.821690 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:25:06.830807 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 02:25:06.860373 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Dec 13 02:25:06.900855 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 02:25:06.910775 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 02:25:06.958344 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:25:06.970180 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 02:25:07.017277 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 02:25:07.019443 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 02:25:07.021089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:25:07.022372 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 02:25:07.028725 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 02:25:07.044103 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 02:25:07.054227 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Dec 13 02:25:07.155988 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Dec 13 02:25:07.156137 kernel: libata version 3.00 loaded.
Dec 13 02:25:07.156152 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 02:25:07.156300 kernel: scsi host0: ata_piix
Dec 13 02:25:07.156423 kernel: scsi host1: ata_piix
Dec 13 02:25:07.156578 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Dec 13 02:25:07.156592 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Dec 13 02:25:07.156621 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:25:07.156633 kernel: GPT:17805311 != 41943039
Dec 13 02:25:07.156648 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:25:07.156663 kernel: GPT:17805311 != 41943039
Dec 13 02:25:07.156678 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:25:07.156689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:25:07.074303 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:25:07.074491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:25:07.075383 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:25:07.075889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:25:07.076010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:25:07.076649 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:25:07.084044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:25:07.148852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:25:07.161488 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:25:07.187559 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:25:07.297597 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445)
Dec 13 02:25:07.305593 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (448)
Dec 13 02:25:07.337833 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 02:25:07.343449 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 02:25:07.349334 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 02:25:07.354005 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 02:25:07.354634 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 02:25:07.367670 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 02:25:07.540808 disk-uuid[505]: Primary Header is updated.
Dec 13 02:25:07.540808 disk-uuid[505]: Secondary Entries is updated.
Dec 13 02:25:07.540808 disk-uuid[505]: Secondary Header is updated.
Dec 13 02:25:07.668691 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:25:07.679574 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:25:07.697579 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:25:08.887708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:25:08.889128 disk-uuid[506]: The operation has completed successfully.
Dec 13 02:25:09.277095 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:25:09.277372 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 02:25:09.311771 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 02:25:09.330151 sh[520]: Success
Dec 13 02:25:09.360590 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Dec 13 02:25:09.462110 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 02:25:09.484837 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 02:25:09.491225 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 02:25:09.529937 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 02:25:09.529997 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:25:09.533692 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 02:25:09.537332 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 02:25:09.540054 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 02:25:09.558785 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 02:25:09.561388 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 02:25:09.576867 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 02:25:09.581818 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 02:25:09.602567 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:25:09.607721 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:25:09.607750 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:25:09.622595 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:25:09.646213 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:25:09.651092 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:25:09.666137 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 02:25:09.671744 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 02:25:09.732740 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 02:25:09.742879 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 02:25:09.767997 systemd-networkd[704]: lo: Link UP
Dec 13 02:25:09.768008 systemd-networkd[704]: lo: Gained carrier
Dec 13 02:25:09.770083 systemd-networkd[704]: Enumeration completed
Dec 13 02:25:09.770184 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 02:25:09.770979 systemd-networkd[704]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:25:09.770983 systemd-networkd[704]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:25:09.772474 systemd-networkd[704]: eth0: Link UP
Dec 13 02:25:09.772480 systemd-networkd[704]: eth0: Gained carrier
Dec 13 02:25:09.772488 systemd-networkd[704]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:25:09.774644 systemd[1]: Reached target network.target - Network.
Dec 13 02:25:09.789636 systemd-networkd[704]: eth0: DHCPv4 address 172.24.4.31/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 02:25:09.828405 ignition[629]: Ignition 2.19.0
Dec 13 02:25:09.828419 ignition[629]: Stage: fetch-offline
Dec 13 02:25:09.828461 ignition[629]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:25:09.828472 ignition[629]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:25:09.830759 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 02:25:09.828641 ignition[629]: parsed url from cmdline: ""
Dec 13 02:25:09.828646 ignition[629]: no config URL provided
Dec 13 02:25:09.828653 ignition[629]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:25:09.828665 ignition[629]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:25:09.828671 ignition[629]: failed to fetch config: resource requires networking
Dec 13 02:25:09.828886 ignition[629]: Ignition finished successfully
Dec 13 02:25:09.837765 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 02:25:09.851470 ignition[712]: Ignition 2.19.0
Dec 13 02:25:09.851484 ignition[712]: Stage: fetch
Dec 13 02:25:09.851701 ignition[712]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:25:09.851714 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:25:09.851817 ignition[712]: parsed url from cmdline: ""
Dec 13 02:25:09.851821 ignition[712]: no config URL provided
Dec 13 02:25:09.851827 ignition[712]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:25:09.851836 ignition[712]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:25:09.851982 ignition[712]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 02:25:09.852004 ignition[712]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 02:25:09.852038 ignition[712]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 02:25:10.172071 ignition[712]: GET result: OK
Dec 13 02:25:10.172259 ignition[712]: parsing config with SHA512: 2bfd130e27943a88832113c0b087fa9e8d9e138f38dc4cba753b9d0a1873a68188121a391a432df88fdf86c3f4eb88959ddd8125455daf96278837c0b3ce3c9a
Dec 13 02:25:10.182956 unknown[712]: fetched base config from "system"
Dec 13 02:25:10.182983 unknown[712]: fetched base config from "system"
Dec 13 02:25:10.183950 ignition[712]: fetch: fetch complete
Dec 13 02:25:10.183009 unknown[712]: fetched user config from "openstack"
Dec 13 02:25:10.183962 ignition[712]: fetch: fetch passed
Dec 13 02:25:10.187647 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 02:25:10.184054 ignition[712]: Ignition finished successfully
Dec 13 02:25:10.197858 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 02:25:10.242833 ignition[718]: Ignition 2.19.0
Dec 13 02:25:10.242861 ignition[718]: Stage: kargs
Dec 13 02:25:10.243317 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:25:10.243345 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:25:10.245646 ignition[718]: kargs: kargs passed
Dec 13 02:25:10.248247 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 02:25:10.245790 ignition[718]: Ignition finished successfully
Dec 13 02:25:10.257865 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 02:25:10.301682 ignition[724]: Ignition 2.19.0
Dec 13 02:25:10.301711 ignition[724]: Stage: disks
Dec 13 02:25:10.302132 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:25:10.302159 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:25:10.304452 ignition[724]: disks: disks passed
Dec 13 02:25:10.307049 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 02:25:10.304633 ignition[724]: Ignition finished successfully
Dec 13 02:25:10.309850 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 02:25:10.311774 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 02:25:10.313601 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 02:25:10.315429 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 02:25:10.317366 systemd[1]: Reached target basic.target - Basic System.
Dec 13 02:25:10.330800 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 02:25:10.356214 systemd-fsck[732]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 02:25:10.366292 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 02:25:10.372702 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 02:25:10.527731 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 02:25:10.529085 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 02:25:10.530326 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 02:25:10.538710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:25:10.542043 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 02:25:10.545841 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 02:25:10.563694 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (740)
Dec 13 02:25:10.563751 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:25:10.563786 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:25:10.563818 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:25:10.563849 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:25:10.555755 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 02:25:10.564076 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:25:10.564115 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 02:25:10.571670 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:25:10.572294 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 02:25:10.591756 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 02:25:10.710696 initrd-setup-root[768]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:25:10.718323 initrd-setup-root[775]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:25:10.724802 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:25:10.729530 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:25:10.836523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 02:25:10.843636 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 02:25:10.847530 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 02:25:10.855534 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:25:10.855895 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 02:25:10.888845 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 02:25:10.908776 ignition[858]: INFO : Ignition 2.19.0
Dec 13 02:25:10.908776 ignition[858]: INFO : Stage: mount
Dec 13 02:25:10.910147 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:25:10.910147 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:25:10.912862 ignition[858]: INFO : mount: mount passed
Dec 13 02:25:10.914232 ignition[858]: INFO : Ignition finished successfully
Dec 13 02:25:10.914950 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 02:25:11.082012 systemd-networkd[704]: eth0: Gained IPv6LL
Dec 13 02:25:17.822128 coreos-metadata[742]: Dec 13 02:25:17.822 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 02:25:17.862052 coreos-metadata[742]: Dec 13 02:25:17.861 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 02:25:17.875253 coreos-metadata[742]: Dec 13 02:25:17.875 INFO Fetch successful
Dec 13 02:25:17.877581 coreos-metadata[742]: Dec 13 02:25:17.876 INFO wrote hostname ci-4081-2-1-b-462e46fdf9.novalocal to /sysroot/etc/hostname
Dec 13 02:25:17.881071 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 02:25:17.881780 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 02:25:17.895766 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 02:25:17.938986 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:25:17.956587 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (874)
Dec 13 02:25:17.964896 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:25:17.964956 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:25:17.964980 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:25:17.970593 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:25:17.976461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:25:18.017398 ignition[892]: INFO : Ignition 2.19.0
Dec 13 02:25:18.017398 ignition[892]: INFO : Stage: files
Dec 13 02:25:18.020108 ignition[892]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:25:18.020108 ignition[892]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:25:18.020108 ignition[892]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:25:18.025828 ignition[892]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:25:18.025828 ignition[892]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:25:18.029566 ignition[892]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:25:18.029566 ignition[892]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:25:18.033252 ignition[892]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:25:18.029758 unknown[892]: wrote ssh authorized keys file for user: core
Dec 13 02:25:18.036460 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:25:18.036460 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 02:25:19.129768 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 02:25:19.428817 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:25:19.428817 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:25:19.433340 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 02:25:19.802199 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 02:25:21.351140 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:25:21.351140 ignition[892]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 02:25:21.357024 ignition[892]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:25:21.357024 ignition[892]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:25:21.357024 ignition[892]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 02:25:21.357024 ignition[892]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 02:25:21.357024 ignition[892]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 02:25:21.357024 ignition[892]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:25:21.357024 ignition[892]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:25:21.357024 ignition[892]: INFO : files: files passed
Dec 13 02:25:21.357024 ignition[892]: INFO : Ignition finished successfully
Dec 13 02:25:21.355814 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 02:25:21.368266 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 02:25:21.371890 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 02:25:21.379088 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:25:21.379196 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 02:25:21.390788 initrd-setup-root-after-ignition[921]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:25:21.390788 initrd-setup-root-after-ignition[921]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:25:21.395090 initrd-setup-root-after-ignition[925]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:25:21.393162 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 02:25:21.396074 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 02:25:21.409766 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 02:25:21.442018 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:25:21.442294 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 02:25:21.446145 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 02:25:21.447625 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 02:25:21.449754 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 02:25:21.462691 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 02:25:21.474059 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:25:21.476680 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 02:25:21.501468 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:25:21.503244 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:25:21.505330 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 02:25:21.507156 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:25:21.507610 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:25:21.510019 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 02:25:21.511972 systemd[1]: Stopped target basic.target - Basic System. Dec 13 02:25:21.513875 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 02:25:21.515630 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 02:25:21.516787 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 02:25:21.518154 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 02:25:21.519266 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 02:25:21.520414 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 02:25:21.521590 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 02:25:21.522671 systemd[1]: Stopped target swap.target - Swaps. Dec 13 02:25:21.523616 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:25:21.523731 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 02:25:21.524926 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:25:21.525661 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:25:21.526741 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 02:25:21.526851 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:25:21.527911 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:25:21.528068 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 02:25:21.529366 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:25:21.529493 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 02:25:21.530915 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:25:21.531071 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 02:25:21.544054 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 02:25:21.546746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 02:25:21.547284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:25:21.547463 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:25:21.551170 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 02:25:21.551336 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 02:25:21.563573 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:25:21.563694 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 02:25:21.574908 ignition[945]: INFO : Ignition 2.19.0 Dec 13 02:25:21.574908 ignition[945]: INFO : Stage: umount Dec 13 02:25:21.578600 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:25:21.578600 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 02:25:21.578600 ignition[945]: INFO : umount: umount passed Dec 13 02:25:21.578600 ignition[945]: INFO : Ignition finished successfully Dec 13 02:25:21.580238 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:25:21.581580 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 02:25:21.583354 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:25:21.583447 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 02:25:21.586925 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:25:21.587046 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 02:25:21.588998 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:25:21.589077 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 02:25:21.591276 systemd[1]: Stopped target network.target - Network. Dec 13 02:25:21.593161 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:25:21.593254 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 02:25:21.594642 systemd[1]: Stopped target paths.target - Path Units. Dec 13 02:25:21.595557 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:25:21.597576 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:25:21.598755 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 02:25:21.599879 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 02:25:21.601151 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:25:21.601283 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 02:25:21.602675 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:25:21.602740 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 02:25:21.604175 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:25:21.604233 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 02:25:21.605195 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 02:25:21.605240 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 02:25:21.606331 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 02:25:21.607604 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 02:25:21.609710 systemd-networkd[704]: eth0: DHCPv6 lease lost Dec 13 02:25:21.609754 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:25:21.610325 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:25:21.610415 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 02:25:21.611452 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:25:21.611549 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Dec 13 02:25:21.613967 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:25:21.614123 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 02:25:21.615766 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:25:21.615873 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 02:25:21.618811 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:25:21.618855 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:25:21.624646 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 02:25:21.627208 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:25:21.627265 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 02:25:21.628375 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:25:21.628418 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:25:21.629429 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:25:21.629472 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 02:25:21.630667 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 02:25:21.630709 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:25:21.632018 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:25:21.641484 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:25:21.641668 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:25:21.642458 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:25:21.642553 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 02:25:21.643917 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:25:21.643971 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 02:25:21.645213 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:25:21.645254 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:25:21.646188 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:25:21.646232 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 02:25:21.647755 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:25:21.647799 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 02:25:21.648786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:25:21.648829 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 02:25:21.655679 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 02:25:21.658345 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:25:21.658403 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:25:21.659846 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 02:25:21.659891 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 02:25:21.663720 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Dec 13 02:25:21.663767 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:25:21.664910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:25:21.664952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:25:21.666490 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:25:21.666607 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 02:25:21.667961 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 02:25:21.671687 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 02:25:21.681004 systemd[1]: Switching root. Dec 13 02:25:21.710786 systemd-journald[185]: Journal stopped Dec 13 02:25:24.109840 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 02:25:24.109904 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:25:24.109923 kernel: SELinux: policy capability open_perms=1 Dec 13 02:25:24.109943 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:25:24.109956 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:25:24.109967 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:25:24.109981 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:25:24.109992 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:25:24.110003 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:25:24.110015 kernel: audit: type=1403 audit(1734056722.722:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:25:24.110027 systemd[1]: Successfully loaded SELinux policy in 69.768ms. Dec 13 02:25:24.110046 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.652ms. Dec 13 02:25:24.110060 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 02:25:24.110072 systemd[1]: Detected virtualization kvm. Dec 13 02:25:24.110089 systemd[1]: Detected architecture x86-64. Dec 13 02:25:24.110100 systemd[1]: Detected first boot. Dec 13 02:25:24.110112 systemd[1]: Hostname set to <ci-4081-2-1-b-462e46fdf9.novalocal>. Dec 13 02:25:24.110125 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:25:24.110136 zram_generator::config[987]: No configuration found. Dec 13 02:25:24.110149 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:25:24.110162 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 02:25:24.110176 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 02:25:24.110188 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:25:24.110204 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 02:25:24.110217 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 02:25:24.110229 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 02:25:24.110242 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 02:25:24.110254 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 02:25:24.110266 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
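At "Switching root." PID 1 tears down the initramfs journal and re-executes into the merged /usr, which is why journald reports "Journal stopped" and later "Journal started" within the same boot. Standard journalctl invocations for replaying this kind of sequence (commands only; the unit name is taken from the log itself):

    journalctl --list-boots                  # enumerate boots known to the journal
    journalctl -b -o short-precise           # current boot with microsecond timestamps, as above
    journalctl -b -u ignition-files.service  # isolate the Ignition files stage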
Dec 13 02:25:24.110278 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 02:25:24.110294 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 02:25:24.110307 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:25:24.110320 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:25:24.110333 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 02:25:24.110351 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 02:25:24.110369 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 02:25:24.110381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 02:25:24.110394 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 02:25:24.110407 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:25:24.110422 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 02:25:24.110435 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 02:25:24.110447 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 02:25:24.110459 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 02:25:24.110472 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:25:24.110484 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 02:25:24.110499 systemd[1]: Reached target slices.target - Slice Units. Dec 13 02:25:24.110529 systemd[1]: Reached target swap.target - Swaps. Dec 13 02:25:24.110542 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 02:25:24.110554 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 02:25:24.110566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:25:24.110579 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 02:25:24.110592 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:25:24.110605 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 02:25:24.110617 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 02:25:24.110633 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 02:25:24.110645 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 02:25:24.110658 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:24.110670 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 02:25:24.110682 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 02:25:24.110694 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 02:25:24.110707 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:25:24.110719 systemd[1]: Reached target machines.target - Containers. 
Dec 13 02:25:24.110732 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 02:25:24.110747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:25:24.110759 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 02:25:24.110772 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 02:25:24.110784 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:25:24.110796 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:25:24.110808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:25:24.110820 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 02:25:24.110832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:25:24.110849 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:25:24.110861 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:25:24.110873 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 02:25:24.110885 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:25:24.110897 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:25:24.110909 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 02:25:24.110922 kernel: fuse: init (API version 7.39) Dec 13 02:25:24.110933 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 02:25:24.110946 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 02:25:24.110960 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 02:25:24.110973 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 02:25:24.110985 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:25:24.110998 systemd[1]: Stopped verity-setup.service. Dec 13 02:25:24.111010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:24.111023 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 02:25:24.111035 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 02:25:24.111047 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 02:25:24.111059 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 02:25:24.111074 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 02:25:24.111086 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 02:25:24.111098 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:25:24.111112 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:25:24.111125 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 02:25:24.111139 kernel: ACPI: bus type drm_connector registered Dec 13 02:25:24.111168 systemd-journald[1076]: Collecting audit messages is disabled. Dec 13 02:25:24.111194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 02:25:24.111208 systemd-journald[1076]: Journal started Dec 13 02:25:24.111237 systemd-journald[1076]: Runtime Journal (/run/log/journal/17bb280735104834b15b765837cb60c9) is 4.9M, max 39.3M, 34.4M free. Dec 13 02:25:23.753196 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:25:23.782608 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 02:25:23.783030 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 02:25:24.113584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:25:24.115546 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 02:25:24.116849 kernel: loop: module loaded Dec 13 02:25:24.118882 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:25:24.120604 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:25:24.121475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:25:24.122033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:25:24.123348 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:25:24.123480 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 02:25:24.125841 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:25:24.125988 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:25:24.126949 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 02:25:24.127783 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 02:25:24.129800 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 02:25:24.142268 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 02:25:24.148162 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 02:25:24.155616 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 02:25:24.159590 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 02:25:24.161659 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:25:24.161703 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 02:25:24.165057 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 02:25:24.176456 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 02:25:24.178924 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 02:25:24.179975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:25:24.185715 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 02:25:24.190689 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 02:25:24.191353 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:25:24.199723 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 02:25:24.200393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
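The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units finishing above are instances of systemd's modprobe@.service template: each oneshot-loads the kernel module named by its instance specifier, and the kernel lines (fuse, drm_connector, loop) confirm the loads. Approximately, paraphrasing the upstream systemd template rather than quoting this image's copy:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

The leading "-" on ExecStart makes a missing module non-fatal, which is why "skip" rather than failure is the worst case here.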
Dec 13 02:25:24.205010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 02:25:24.209806 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 02:25:24.217673 systemd-journald[1076]: Time spent on flushing to /var/log/journal/17bb280735104834b15b765837cb60c9 is 23.101ms for 935 entries. Dec 13 02:25:24.217673 systemd-journald[1076]: System Journal (/var/log/journal/17bb280735104834b15b765837cb60c9) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:25:24.277496 systemd-journald[1076]: Received client request to flush runtime journal. Dec 13 02:25:24.219109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 02:25:24.222628 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:25:24.223484 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 02:25:24.224618 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 02:25:24.227662 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 02:25:24.240478 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 02:25:24.244123 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 02:25:24.246637 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 02:25:24.256792 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 02:25:24.280590 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 02:25:24.289110 udevadm[1126]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 02:25:24.388670 kernel: loop0: detected capacity change from 0 to 140768 Dec 13 02:25:24.398071 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:25:24.415367 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:25:24.419610 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 02:25:24.431147 systemd-tmpfiles[1121]: ACLs are not supported, ignoring. Dec 13 02:25:24.431167 systemd-tmpfiles[1121]: ACLs are not supported, ignoring. Dec 13 02:25:24.443040 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 02:25:24.454718 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 02:25:24.461527 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:25:24.491543 kernel: loop1: detected capacity change from 0 to 8 Dec 13 02:25:24.514540 kernel: loop2: detected capacity change from 0 to 205544 Dec 13 02:25:24.525400 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 02:25:24.532739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 02:25:24.569811 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Dec 13 02:25:24.569836 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Dec 13 02:25:24.576734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
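The journald lines above show the runtime journal under /run/log/journal (4.9M used, 39.3M cap) being flushed into the persistent system journal under /var/log/journal (8.0M used, 584.8M cap). Those caps follow journald's disk-usage limits; a hedged journald.conf fragment that would pin equivalent values explicitly (illustrative values, not this host's computed defaults) looks like:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=persistent
    RuntimeMaxUse=39M
    SystemMaxUse=584M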
Dec 13 02:25:24.581530 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 02:25:24.648545 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 02:25:24.681567 kernel: loop5: detected capacity change from 0 to 8 Dec 13 02:25:24.684538 kernel: loop6: detected capacity change from 0 to 205544 Dec 13 02:25:24.751985 kernel: loop7: detected capacity change from 0 to 142488 Dec 13 02:25:24.835364 (sd-merge)[1149]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 02:25:24.836157 (sd-merge)[1149]: Merged extensions into '/usr'. Dec 13 02:25:24.840886 systemd[1]: Reloading requested from client PID 1120 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 02:25:24.841002 systemd[1]: Reloading... Dec 13 02:25:24.940577 zram_generator::config[1171]: No configuration found. Dec 13 02:25:25.214297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:25:25.276106 systemd[1]: Reloading finished in 434 ms. Dec 13 02:25:25.301432 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 02:25:25.310720 systemd[1]: Starting ensure-sysext.service... Dec 13 02:25:25.313683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 02:25:25.363221 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:25:25.363928 systemd[1]: Reloading requested from client PID 1230 ('systemctl') (unit ensure-sysext.service)... Dec 13 02:25:25.363942 systemd[1]: Reloading... Dec 13 02:25:25.365296 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 02:25:25.367867 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:25:25.368889 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Dec 13 02:25:25.369217 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Dec 13 02:25:25.398726 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:25:25.399352 systemd-tmpfiles[1231]: Skipping /boot Dec 13 02:25:25.431287 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:25:25.432576 systemd-tmpfiles[1231]: Skipping /boot Dec 13 02:25:25.469575 zram_generator::config[1256]: No configuration found. Dec 13 02:25:25.625970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:25:25.684881 systemd[1]: Reloading finished in 320 ms. Dec 13 02:25:25.703164 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 02:25:25.704361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:25:25.721840 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 02:25:25.732751 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 02:25:25.752131 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
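The (sd-merge) lines record systemd-sysext overlaying the four extension images, one per loop device probed above, into /usr, followed by the daemon reload that picks up the freshly merged unit files. For an image to merge it must ship an extension-release file whose fields match the host's os-release; a minimal sketch for the kubernetes image (field values assumed, following the convention used by flatcar/sysext-bakery):

    # inside kubernetes.raw: usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

    # inspecting the merge at runtime:
    systemd-sysext list
    systemd-sysext refresh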
Dec 13 02:25:25.755704 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 02:25:25.761728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:25:25.767230 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 02:25:25.777316 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:25.777542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:25:25.779243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:25:25.780935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:25:25.785062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:25:25.786759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:25:25.786892 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:25.788993 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:25.789180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:25:25.789343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:25:25.789449 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:25.793917 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:25.794137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:25:25.802208 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:25:25.803850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:25:25.804239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:25.808890 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:25:25.809133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:25:25.812025 systemd[1]: Finished ensure-sysext.service. Dec 13 02:25:25.818777 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 02:25:25.831881 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 02:25:25.836924 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 02:25:25.840484 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:25:25.840723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:25:25.845930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:25:25.846095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 02:25:25.846967 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:25:25.852544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:25:25.852760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:25:25.854771 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:25:25.864444 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Dec 13 02:25:25.890175 augenrules[1351]: No rules Dec 13 02:25:25.890936 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:25:25.891934 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 02:25:25.918450 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 02:25:25.940319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:25:25.951809 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 02:25:25.982605 ldconfig[1115]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:25:26.002241 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 02:25:26.009838 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 02:25:26.049447 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 02:25:26.067518 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 02:25:26.070876 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:25:26.085631 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 02:25:26.085790 systemd-resolved[1322]: Positive Trust Anchors: Dec 13 02:25:26.085807 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:25:26.085853 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 02:25:26.087067 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 02:25:26.098242 systemd-resolved[1322]: Using system hostname 'ci-4081-2-1-b-462e46fdf9.novalocal'. Dec 13 02:25:26.100307 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 02:25:26.101320 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:25:26.109310 systemd-networkd[1365]: lo: Link UP Dec 13 02:25:26.109318 systemd-networkd[1365]: lo: Gained carrier Dec 13 02:25:26.111409 systemd-networkd[1365]: Enumeration completed Dec 13 02:25:26.111707 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 13 02:25:26.112361 systemd[1]: Reached target network.target - Network. Dec 13 02:25:26.122984 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 02:25:26.144566 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1379) Dec 13 02:25:26.147606 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1379) Dec 13 02:25:26.148660 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 02:25:26.167553 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1371) Dec 13 02:25:26.215546 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:25:26.222545 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:25:26.229205 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:25:26.229215 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:25:26.230991 systemd-networkd[1365]: eth0: Link UP Dec 13 02:25:26.231001 systemd-networkd[1365]: eth0: Gained carrier Dec 13 02:25:26.231016 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:25:26.241581 systemd-networkd[1365]: eth0: DHCPv4 address 172.24.4.31/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 02:25:26.243216 systemd-timesyncd[1336]: Network configuration changed, trying to establish connection. Dec 13 02:25:26.249832 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 02:25:26.252540 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 02:25:26.259740 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 02:25:26.284692 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 02:25:26.308565 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 02:25:26.310962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:25:26.318556 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:25:26.325250 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 02:25:26.325352 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 02:25:26.328681 kernel: Console: switching to colour dummy device 80x25 Dec 13 02:25:26.329795 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 02:25:26.329834 kernel: [drm] features: -context_init Dec 13 02:25:26.331315 kernel: [drm] number of scanouts: 1 Dec 13 02:25:26.331355 kernel: [drm] number of cap sets: 0 Dec 13 02:25:26.332801 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 02:25:26.345563 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 02:25:26.349757 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 02:25:26.359032 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 02:25:26.360365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:25:26.360785 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
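As logged above, eth0 is matched by the catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" caveat), which is what enables DHCP on otherwise-unconfigured interfaces and yields the 172.24.4.31/24 lease. Reconstructed from the behaviour logged here rather than quoted from the image, such a unit is roughly:

    [Match]
    Name=*

    [Network]
    DHCP=yes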
Dec 13 02:25:26.371802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:25:26.373151 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:25:26.373526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:25:26.378300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:25:26.383026 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 02:25:26.393707 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 02:25:26.429567 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:25:26.462877 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 02:25:26.465170 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:25:26.470792 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 02:25:26.494168 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:25:26.568884 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 02:25:27.131123 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:25:27.133460 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 02:25:27.135341 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 02:25:27.135716 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 02:25:27.136297 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 02:25:27.137172 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 02:25:27.138923 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 02:25:27.139109 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:25:27.139175 systemd[1]: Reached target paths.target - Path Units. Dec 13 02:25:27.139310 systemd[1]: Reached target timers.target - Timer Units. Dec 13 02:25:27.142660 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 02:25:27.149369 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 02:25:27.159363 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 02:25:27.164473 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 02:25:27.168004 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 02:25:27.171287 systemd[1]: Reached target basic.target - Basic System. Dec 13 02:25:27.174612 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:25:27.174778 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:25:27.186742 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 02:25:27.194029 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 02:25:27.201874 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 02:25:27.215735 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
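The docker.socket unit now listening is the one systemd warned about during the earlier reloads (ListenStream= pointing below the legacy /var/run/). The warning is cosmetic, since systemd rewrites the path to /run/docker.sock at load time, but silencing it is a small unit change; a hedged drop-in sketch (the empty assignment resets the listener list before re-adding the corrected path):

    # drop-in for docker.socket, e.g. /etc/systemd/system/docker.socket.d/10-run.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock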
Dec 13 02:25:27.228681 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 02:25:27.233600 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 02:25:27.239974 jq[1424]: false Dec 13 02:25:27.242212 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 02:25:27.251702 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 02:25:27.254979 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 02:25:27.264805 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 02:25:27.274132 extend-filesystems[1425]: Found loop4 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found loop5 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found loop6 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found loop7 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found vda Dec 13 02:25:27.279215 extend-filesystems[1425]: Found vda1 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found vda2 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found vda3 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found usr Dec 13 02:25:27.279215 extend-filesystems[1425]: Found vda4 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found vda6 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found vda7 Dec 13 02:25:27.279215 extend-filesystems[1425]: Found vda9 Dec 13 02:25:27.279215 extend-filesystems[1425]: Checking size of /dev/vda9 Dec 13 02:25:27.417010 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 02:25:27.417088 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1373) Dec 13 02:25:27.274835 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 02:25:27.312637 dbus-daemon[1421]: [system] SELinux support is enabled Dec 13 02:25:27.418693 extend-filesystems[1425]: Resized partition /dev/vda9 Dec 13 02:25:27.278331 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:25:27.433469 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024) Dec 13 02:25:27.283862 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:25:27.443473 update_engine[1434]: I20241213 02:25:27.357393 1434 main.cc:92] Flatcar Update Engine starting Dec 13 02:25:27.443473 update_engine[1434]: I20241213 02:25:27.373056 1434 update_check_scheduler.cc:74] Next update check in 7m27s Dec 13 02:25:27.297883 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 02:25:27.317778 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 02:25:27.452173 jq[1441]: true Dec 13 02:25:27.334685 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 02:25:27.371094 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:25:27.371362 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 02:25:27.373613 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:25:27.373799 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
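update_engine starts above and schedules its first check 7m27s out; both it and locksmithd (which starts just below with strategy "reboot") read /etc/flatcar/update.conf, the file Ignition wrote during the files stage. A sketch of a config yielding the logged reboot strategy (GROUP value assumed, as the log does not show it):

    # /etc/flatcar/update.conf
    GROUP=stable
    REBOOT_STRATEGY=reboot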
Dec 13 02:25:27.408797 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:25:27.409044 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 02:25:27.415151 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 02:25:27.417101 systemd[1]: Started update-engine.service - Update Engine. Dec 13 02:25:27.450834 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:25:27.450872 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 02:25:27.453366 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:25:27.453392 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 02:25:27.464883 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 02:25:27.476100 jq[1451]: true Dec 13 02:25:27.476367 tar[1449]: linux-amd64/helm Dec 13 02:25:27.574471 systemd-logind[1431]: New seat seat0. Dec 13 02:25:27.619587 systemd-logind[1431]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:25:27.619638 systemd-logind[1431]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:25:27.620012 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 02:25:27.676535 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 02:25:27.691772 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:25:27.836282 bash[1477]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:25:27.839398 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 02:25:27.839398 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 02:25:27.839398 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 02:25:27.861420 extend-filesystems[1425]: Resized filesystem in /dev/vda9 Dec 13 02:25:27.840253 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 02:25:27.848206 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:25:27.849668 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 02:25:27.880136 systemd[1]: Starting sshkeys.service... Dec 13 02:25:27.894871 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:25:27.908992 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 02:25:27.913886 systemd-networkd[1365]: eth0: Gained IPv6LL Dec 13 02:25:27.915022 systemd-timesyncd[1336]: Network configuration changed, trying to establish connection. Dec 13 02:25:27.916935 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 02:25:27.920716 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 02:25:27.930165 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 02:25:27.943197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
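The extend-filesystems sequence above grows the root ext4 filesystem on /dev/vda9 on-line from 1617920 to 4635643 4KiB blocks, i.e. from about 6.2 GiB to about 17.7 GiB (1617920 x 4096 B is roughly 6.6 GB; 4635643 x 4096 B is roughly 19.0 GB), consistent with old_desc_blocks = 1 -> new_desc_blocks = 3. The equivalent manual step, assuming the partition has already been enlarged, is the single command the service wraps:

    resize2fs /dev/vda9   # on-line ext4 grow to fill the partition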
Dec 13 02:25:27.947856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 02:25:27.965527 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 02:25:27.976886 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 02:25:27.988420 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:25:27.988937 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 02:25:28.001320 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 02:25:28.044238 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 02:25:28.055006 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 02:25:28.068110 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 02:25:28.070262 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 02:25:28.117963 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 02:25:28.182295 containerd[1455]: time="2024-12-13T02:25:28.182203427Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 02:25:28.216790 containerd[1455]: time="2024-12-13T02:25:28.216477639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.218739320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.218771831Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.218789354Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.218945106Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.218963942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.219025747Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.219041337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.219205515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.219223839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.219238436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219521 containerd[1455]: time="2024-12-13T02:25:28.219251761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219770 containerd[1455]: time="2024-12-13T02:25:28.219329126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:28.219865 containerd[1455]: time="2024-12-13T02:25:28.219846717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:28.220019 containerd[1455]: time="2024-12-13T02:25:28.219998822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:25:28.220082 containerd[1455]: time="2024-12-13T02:25:28.220068704Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:25:28.220231 containerd[1455]: time="2024-12-13T02:25:28.220213265Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:25:28.220342 containerd[1455]: time="2024-12-13T02:25:28.220324744Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:25:28.373650 containerd[1455]: time="2024-12-13T02:25:28.373447556Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.375829513Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.375905706Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.375954177Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.375996256Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.376322197Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.377003885Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.377254846Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.377305501Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.377340827Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.377376404Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.377409466Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.377441356Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:25:28.377555 containerd[1455]: time="2024-12-13T02:25:28.377485399Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:25:28.378552 containerd[1455]: time="2024-12-13T02:25:28.378461529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:25:28.378763 containerd[1455]: time="2024-12-13T02:25:28.378723390Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:25:28.378976 containerd[1455]: time="2024-12-13T02:25:28.378934106Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381595858Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381662843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381699402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381734398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381819187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381856967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381890891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381920927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381952677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.381989807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.382028138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.382059397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.382090746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.382131923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383069 containerd[1455]: time="2024-12-13T02:25:28.382226420Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382286172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382321238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382350723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382463465Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382543395Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382578190Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382613797Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382642150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382673318Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382699578Z" level=info msg="NRI interface is disabled by configuration." Dec 13 02:25:28.383856 containerd[1455]: time="2024-12-13T02:25:28.382725877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 02:25:28.385153 containerd[1455]: time="2024-12-13T02:25:28.384984834Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:25:28.385985 containerd[1455]: time="2024-12-13T02:25:28.385566214Z" level=info msg="Connect containerd service" Dec 13 02:25:28.385985 containerd[1455]: time="2024-12-13T02:25:28.385668366Z" level=info msg="using legacy CRI server" Dec 13 02:25:28.385985 containerd[1455]: time="2024-12-13T02:25:28.385689656Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 02:25:28.388137 containerd[1455]: time="2024-12-13T02:25:28.386998160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:25:28.389932 containerd[1455]: time="2024-12-13T02:25:28.389869876Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:25:28.390319 
containerd[1455]: time="2024-12-13T02:25:28.390203161Z" level=info msg="Start subscribing containerd event" Dec 13 02:25:28.390443 containerd[1455]: time="2024-12-13T02:25:28.390331722Z" level=info msg="Start recovering state" Dec 13 02:25:28.390539 containerd[1455]: time="2024-12-13T02:25:28.390447609Z" level=info msg="Start event monitor" Dec 13 02:25:28.390539 containerd[1455]: time="2024-12-13T02:25:28.390470783Z" level=info msg="Start snapshots syncer" Dec 13 02:25:28.390539 containerd[1455]: time="2024-12-13T02:25:28.390490289Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:25:28.390539 containerd[1455]: time="2024-12-13T02:25:28.390517771Z" level=info msg="Start streaming server" Dec 13 02:25:28.391288 containerd[1455]: time="2024-12-13T02:25:28.391242229Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:25:28.399257 containerd[1455]: time="2024-12-13T02:25:28.391311449Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:25:28.399257 containerd[1455]: time="2024-12-13T02:25:28.391378235Z" level=info msg="containerd successfully booted in 0.210615s" Dec 13 02:25:28.391616 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 02:25:28.442170 tar[1449]: linux-amd64/LICENSE Dec 13 02:25:28.442170 tar[1449]: linux-amd64/README.md Dec 13 02:25:28.451883 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 02:25:29.210917 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 02:25:29.222174 systemd[1]: Started sshd@0-172.24.4.31:22-172.24.4.1:44276.service - OpenSSH per-connection server daemon (172.24.4.1:44276). Dec 13 02:25:29.995835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:25:30.012476 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:25:30.419205 sshd[1531]: Accepted publickey for core from 172.24.4.1 port 44276 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:25:30.423299 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:25:30.453664 systemd-logind[1431]: New session 1 of user core. Dec 13 02:25:30.456671 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 02:25:30.467937 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 02:25:30.502224 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 02:25:30.513838 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 02:25:30.538219 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:25:30.689259 systemd[1547]: Queued start job for default target default.target. Dec 13 02:25:30.697534 systemd[1547]: Created slice app.slice - User Application Slice. Dec 13 02:25:30.697564 systemd[1547]: Reached target paths.target - Paths. Dec 13 02:25:30.697580 systemd[1547]: Reached target timers.target - Timers. Dec 13 02:25:30.699306 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 02:25:30.729437 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 02:25:30.730375 systemd[1547]: Reached target sockets.target - Sockets. Dec 13 02:25:30.730766 systemd[1547]: Reached target basic.target - Basic System. 
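containerd reports serving on /run/containerd/containerd.sock above. A minimal connectivity probe for that socket (a raw AF_UNIX connect, not a CRI or containerd API client -- those would need the gRPC bindings):

    import socket

    def socket_alive(path: str = "/run/containerd/containerd.sock") -> bool:
        # Probe the unix socket the containerd log says it is serving on.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.settimeout(1.0)
            s.connect(path)
            return True
        except OSError:
            return False
        finally:
            s.close()

    print(socket_alive())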
Dec 13 02:25:30.730890 systemd[1547]: Reached target default.target - Main User Target. Dec 13 02:25:30.730940 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 02:25:30.730959 systemd[1547]: Startup finished in 185ms. Dec 13 02:25:30.740964 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 02:25:31.254249 systemd[1]: Started sshd@1-172.24.4.31:22-172.24.4.1:46892.service - OpenSSH per-connection server daemon (172.24.4.1:46892). Dec 13 02:25:31.266472 kubelet[1538]: E1213 02:25:31.266349 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:25:31.270250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:25:31.270415 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:25:31.270778 systemd[1]: kubelet.service: Consumed 1.868s CPU time. Dec 13 02:25:32.786915 sshd[1559]: Accepted publickey for core from 172.24.4.1 port 46892 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:25:32.789769 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:25:32.800994 systemd-logind[1431]: New session 2 of user core. Dec 13 02:25:32.809249 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 02:25:33.109762 login[1513]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:25:33.123386 systemd-logind[1431]: New session 3 of user core. Dec 13 02:25:33.124559 login[1514]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:25:33.134929 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 02:25:33.147957 systemd-logind[1431]: New session 4 of user core. Dec 13 02:25:33.155908 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 02:25:33.385016 sshd[1559]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:33.395955 systemd[1]: sshd@1-172.24.4.31:22-172.24.4.1:46892.service: Deactivated successfully. Dec 13 02:25:33.399985 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:25:33.402673 systemd-logind[1431]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:25:33.413304 systemd[1]: Started sshd@2-172.24.4.31:22-172.24.4.1:46906.service - OpenSSH per-connection server daemon (172.24.4.1:46906). Dec 13 02:25:33.416090 systemd-logind[1431]: Removed session 2. Dec 13 02:25:34.278009 coreos-metadata[1420]: Dec 13 02:25:34.277 WARN failed to locate config-drive, using the metadata service API instead Dec 13 02:25:34.322389 coreos-metadata[1420]: Dec 13 02:25:34.322 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 02:25:34.514965 sshd[1593]: Accepted publickey for core from 172.24.4.1 port 46906 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:25:34.517981 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:25:34.527669 systemd-logind[1431]: New session 5 of user core. 
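The kubelet exit above is a missing-file error, not a parse error: /var/lib/kubelet/config.yaml simply does not exist yet. A sketch of that precondition check (the "until kubeadm writes it" framing is an assumption, inferred from the kubeadm-style path and the KUBELET_KUBEADM_ARGS reference in the unit):

    import os
    import sys

    CONFIG = "/var/lib/kubelet/config.yaml"  # path from the error above

    if not os.path.exists(CONFIG):
        # Matches the failure mode in the log: exit non-zero and let
        # systemd's restart policy keep retrying until the file appears.
        sys.exit(f"kubelet config missing: {CONFIG}")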
Dec 13 02:25:34.536891 coreos-metadata[1420]: Dec 13 02:25:34.536 INFO Fetch successful Dec 13 02:25:34.536891 coreos-metadata[1420]: Dec 13 02:25:34.536 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 02:25:34.537458 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 02:25:34.553240 coreos-metadata[1420]: Dec 13 02:25:34.552 INFO Fetch successful Dec 13 02:25:34.553605 coreos-metadata[1420]: Dec 13 02:25:34.553 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 02:25:34.569225 coreos-metadata[1420]: Dec 13 02:25:34.568 INFO Fetch successful Dec 13 02:25:34.569225 coreos-metadata[1420]: Dec 13 02:25:34.569 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 02:25:34.580278 coreos-metadata[1420]: Dec 13 02:25:34.580 INFO Fetch successful Dec 13 02:25:34.580278 coreos-metadata[1420]: Dec 13 02:25:34.580 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 02:25:34.594396 coreos-metadata[1420]: Dec 13 02:25:34.594 INFO Fetch successful Dec 13 02:25:34.594396 coreos-metadata[1420]: Dec 13 02:25:34.594 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 02:25:34.611165 coreos-metadata[1420]: Dec 13 02:25:34.611 INFO Fetch successful Dec 13 02:25:34.647872 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 02:25:34.649681 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 02:25:35.028802 coreos-metadata[1496]: Dec 13 02:25:35.028 WARN failed to locate config-drive, using the metadata service API instead Dec 13 02:25:35.047289 coreos-metadata[1496]: Dec 13 02:25:35.047 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 02:25:35.064167 coreos-metadata[1496]: Dec 13 02:25:35.064 INFO Fetch successful Dec 13 02:25:35.064350 coreos-metadata[1496]: Dec 13 02:25:35.064 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 02:25:35.080122 coreos-metadata[1496]: Dec 13 02:25:35.079 INFO Fetch successful Dec 13 02:25:35.084420 unknown[1496]: wrote ssh authorized keys file for user: core Dec 13 02:25:35.110774 sshd[1593]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:35.118460 systemd[1]: sshd@2-172.24.4.31:22-172.24.4.1:46906.service: Deactivated successfully. Dec 13 02:25:35.123338 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:25:35.128050 systemd-logind[1431]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:25:35.130904 systemd-logind[1431]: Removed session 5. Dec 13 02:25:35.134600 update-ssh-keys[1606]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:25:35.135796 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 02:25:35.140751 systemd[1]: Finished sshkeys.service. Dec 13 02:25:35.146858 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 02:25:35.147478 systemd[1]: Startup finished in 1.111s (kernel) + 16.993s (initrd) + 12.492s (userspace) = 30.597s. Dec 13 02:25:41.292880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:25:41.303063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:25:41.498271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
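coreos-metadata, having found no config drive, walks a fixed set of EC2-compatible endpoints above. An illustrative replay of those exact lookups (only meaningful inside a guest with a metadata service at 169.254.169.254):

    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data/"
    # The same keys the agent fetched in the log, in the same order.
    for key in ("hostname", "instance-id", "instance-type",
                "local-ipv4", "public-ipv4"):
        with urlopen(BASE + key, timeout=2) as resp:
            print(key, "=", resp.read().decode().strip())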
Dec 13 02:25:41.508983 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:25:41.571627 kubelet[1620]: E1213 02:25:41.571260 1620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:25:41.579356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:25:41.579884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:25:45.136226 systemd[1]: Started sshd@3-172.24.4.31:22-172.24.4.1:39814.service - OpenSSH per-connection server daemon (172.24.4.1:39814). Dec 13 02:25:46.298128 sshd[1628]: Accepted publickey for core from 172.24.4.1 port 39814 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:25:46.301760 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:25:46.314800 systemd-logind[1431]: New session 6 of user core. Dec 13 02:25:46.322840 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 02:25:46.933888 sshd[1628]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:46.948870 systemd[1]: sshd@3-172.24.4.31:22-172.24.4.1:39814.service: Deactivated successfully. Dec 13 02:25:46.952701 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:25:46.956854 systemd-logind[1431]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:25:46.964271 systemd[1]: Started sshd@4-172.24.4.31:22-172.24.4.1:39818.service - OpenSSH per-connection server daemon (172.24.4.1:39818). Dec 13 02:25:46.966979 systemd-logind[1431]: Removed session 6. Dec 13 02:25:48.132796 sshd[1635]: Accepted publickey for core from 172.24.4.1 port 39818 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:25:48.136023 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:25:48.145307 systemd-logind[1431]: New session 7 of user core. Dec 13 02:25:48.153871 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 02:25:48.868767 sshd[1635]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:48.880981 systemd[1]: sshd@4-172.24.4.31:22-172.24.4.1:39818.service: Deactivated successfully. Dec 13 02:25:48.885584 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:25:48.888119 systemd-logind[1431]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:25:48.896186 systemd[1]: Started sshd@5-172.24.4.31:22-172.24.4.1:39828.service - OpenSSH per-connection server daemon (172.24.4.1:39828). Dec 13 02:25:48.900135 systemd-logind[1431]: Removed session 7. Dec 13 02:25:50.269859 sshd[1642]: Accepted publickey for core from 172.24.4.1 port 39828 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:25:50.271976 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:25:50.278906 systemd-logind[1431]: New session 8 of user core. Dec 13 02:25:50.289895 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 02:25:50.907427 sshd[1642]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:50.920877 systemd[1]: sshd@5-172.24.4.31:22-172.24.4.1:39828.service: Deactivated successfully. 
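sshd identifies the client key above only by its OpenSSH-style fingerprint ("RSA SHA256:s+jMJ..."). To match an authorized_keys entry against such a line: the fingerprint is the unpadded base64 of a SHA-256 over the decoded key blob. A self-contained sketch:

    import base64
    import hashlib

    def openssh_fingerprint(pubkey_line: str) -> str:
        # pubkey_line is an authorized_keys entry, e.g. "ssh-rsa AAAA... user".
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")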
Dec 13 02:25:50.924911 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:25:50.928920 systemd-logind[1431]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:25:50.935233 systemd[1]: Started sshd@6-172.24.4.31:22-172.24.4.1:39840.service - OpenSSH per-connection server daemon (172.24.4.1:39840). Dec 13 02:25:50.938787 systemd-logind[1431]: Removed session 8. Dec 13 02:25:51.792686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:25:51.801022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:25:52.141954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:25:52.142132 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:25:52.215226 sshd[1649]: Accepted publickey for core from 172.24.4.1 port 39840 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:25:52.219267 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:25:52.230994 systemd-logind[1431]: New session 9 of user core. Dec 13 02:25:52.238872 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 02:25:52.715845 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 02:25:52.716163 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:25:52.729476 sudo[1665]: pam_unix(sudo:session): session closed for user root Dec 13 02:25:52.742660 kubelet[1659]: E1213 02:25:52.742496 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:25:52.744713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:25:52.744879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:25:52.904811 sshd[1649]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:52.918249 systemd[1]: sshd@6-172.24.4.31:22-172.24.4.1:39840.service: Deactivated successfully. Dec 13 02:25:52.923597 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:25:52.925721 systemd-logind[1431]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:25:52.936158 systemd[1]: Started sshd@7-172.24.4.31:22-172.24.4.1:39842.service - OpenSSH per-connection server daemon (172.24.4.1:39842). Dec 13 02:25:52.938767 systemd-logind[1431]: Removed session 9. Dec 13 02:25:54.748911 sshd[1672]: Accepted publickey for core from 172.24.4.1 port 39842 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:25:54.751833 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:25:54.761283 systemd-logind[1431]: New session 10 of user core. Dec 13 02:25:54.767882 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 13 02:25:55.175286 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 02:25:55.176103 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:25:55.184175 sudo[1676]: pam_unix(sudo:session): session closed for user root Dec 13 02:25:55.196049 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 02:25:55.196787 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:25:55.225168 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 02:25:55.230714 auditctl[1679]: No rules Dec 13 02:25:55.231384 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 02:25:55.231882 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 02:25:55.242299 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 02:25:55.297585 augenrules[1697]: No rules Dec 13 02:25:55.300268 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:25:55.302777 sudo[1675]: pam_unix(sudo:session): session closed for user root Dec 13 02:25:55.471810 sshd[1672]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:55.484947 systemd[1]: sshd@7-172.24.4.31:22-172.24.4.1:39842.service: Deactivated successfully. Dec 13 02:25:55.488044 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:25:55.492842 systemd-logind[1431]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:25:55.500232 systemd[1]: Started sshd@8-172.24.4.31:22-172.24.4.1:46916.service - OpenSSH per-connection server daemon (172.24.4.1:46916). Dec 13 02:25:55.503809 systemd-logind[1431]: Removed session 10. Dec 13 02:25:56.783281 sshd[1705]: Accepted publickey for core from 172.24.4.1 port 46916 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:25:56.786646 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:25:56.796227 systemd-logind[1431]: New session 11 of user core. Dec 13 02:25:56.808054 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 02:25:57.197285 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:25:57.198017 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:25:57.828897 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 02:25:57.832706 (dockerd)[1725]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 02:25:58.680214 systemd-resolved[1322]: Clock change detected. Flushing caches. Dec 13 02:25:58.681075 systemd-timesyncd[1336]: Contacted time server 95.81.173.8:123 (2.flatcar.pool.ntp.org). Dec 13 02:25:58.681199 systemd-timesyncd[1336]: Initial clock synchronization to Fri 2024-12-13 02:25:58.680039 UTC. Dec 13 02:25:59.074167 dockerd[1725]: time="2024-12-13T02:25:59.073043079Z" level=info msg="Starting up" Dec 13 02:25:59.258841 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1681893526-merged.mount: Deactivated successfully. Dec 13 02:25:59.329150 dockerd[1725]: time="2024-12-13T02:25:59.328815450Z" level=info msg="Loading containers: start." 
Dec 13 02:25:59.470525 kernel: Initializing XFRM netlink socket Dec 13 02:25:59.627889 systemd-networkd[1365]: docker0: Link UP Dec 13 02:25:59.646645 dockerd[1725]: time="2024-12-13T02:25:59.646432561Z" level=info msg="Loading containers: done." Dec 13 02:25:59.675635 dockerd[1725]: time="2024-12-13T02:25:59.675307315Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:25:59.675635 dockerd[1725]: time="2024-12-13T02:25:59.675539350Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 02:25:59.676016 dockerd[1725]: time="2024-12-13T02:25:59.675791874Z" level=info msg="Daemon has completed initialization" Dec 13 02:25:59.754824 dockerd[1725]: time="2024-12-13T02:25:59.753153386Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:25:59.755711 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 02:26:01.248014 containerd[1455]: time="2024-12-13T02:26:01.247344691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 02:26:02.059410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168740995.mount: Deactivated successfully. Dec 13 02:26:03.333898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 02:26:03.342486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:26:03.475793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:26:03.485655 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:26:03.837189 kubelet[1925]: E1213 02:26:03.837007 1925 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:26:03.841769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:26:03.842058 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
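kubelet.service is now on its third scheduled restart, and the journal timestamps imply a fixed delay of roughly ten seconds between attempts. Checking the spacing with start times copied from the log (the extra ~0.7 s in the second gap is consistent with the clock step systemd-timesyncd logged at 02:25:58):

    from datetime import datetime

    # "Started kubelet.service" timestamps for restart counters 1-3 above.
    starts = ["02:25:41.498271", "02:25:52.141954", "02:26:03.475793"]
    ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in starts]
    for a, b in zip(ts, ts[1:]):
        print(f"{(b - a).total_seconds():.1f} s between restarts")
    # -> 10.6 s and 11.3 s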
Dec 13 02:26:04.070631 containerd[1455]: time="2024-12-13T02:26:04.070533321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:04.072598 containerd[1455]: time="2024-12-13T02:26:04.072268345Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975491" Dec 13 02:26:04.075442 containerd[1455]: time="2024-12-13T02:26:04.075374009Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:04.079802 containerd[1455]: time="2024-12-13T02:26:04.079748744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:04.081291 containerd[1455]: time="2024-12-13T02:26:04.081049113Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.833621697s" Dec 13 02:26:04.081291 containerd[1455]: time="2024-12-13T02:26:04.081087906Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 02:26:04.083665 containerd[1455]: time="2024-12-13T02:26:04.083642617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 02:26:07.010903 containerd[1455]: time="2024-12-13T02:26:07.010736511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:07.014368 containerd[1455]: time="2024-12-13T02:26:07.014040628Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702165" Dec 13 02:26:07.016991 containerd[1455]: time="2024-12-13T02:26:07.016935086Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:07.031154 containerd[1455]: time="2024-12-13T02:26:07.030406903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:07.040097 containerd[1455]: time="2024-12-13T02:26:07.039997640Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 2.956095506s" Dec 13 02:26:07.040097 containerd[1455]: time="2024-12-13T02:26:07.040082910Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 02:26:07.041901 
containerd[1455]: time="2024-12-13T02:26:07.041712546Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 02:26:09.594451 containerd[1455]: time="2024-12-13T02:26:09.594239640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:09.597233 containerd[1455]: time="2024-12-13T02:26:09.597033039Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652075" Dec 13 02:26:09.604222 containerd[1455]: time="2024-12-13T02:26:09.604072883Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:09.617524 containerd[1455]: time="2024-12-13T02:26:09.617408203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:09.621961 containerd[1455]: time="2024-12-13T02:26:09.621704821Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 2.57991454s" Dec 13 02:26:09.621961 containerd[1455]: time="2024-12-13T02:26:09.621784931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 02:26:09.622977 containerd[1455]: time="2024-12-13T02:26:09.622897899Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 02:26:11.410844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221241835.mount: Deactivated successfully. 
Dec 13 02:26:12.516214 containerd[1455]: time="2024-12-13T02:26:12.515790338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:12.518772 containerd[1455]: time="2024-12-13T02:26:12.518022745Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251" Dec 13 02:26:12.525677 containerd[1455]: time="2024-12-13T02:26:12.525558378Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:12.531304 containerd[1455]: time="2024-12-13T02:26:12.531109800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:12.533612 containerd[1455]: time="2024-12-13T02:26:12.533024981Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.910039218s" Dec 13 02:26:12.533612 containerd[1455]: time="2024-12-13T02:26:12.533111854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 02:26:12.535110 containerd[1455]: time="2024-12-13T02:26:12.534982262Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:26:13.107224 update_engine[1434]: I20241213 02:26:13.105458 1434 update_attempter.cc:509] Updating boot flags... Dec 13 02:26:13.193159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1956) Dec 13 02:26:13.230807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3599960933.mount: Deactivated successfully. Dec 13 02:26:13.283180 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1954) Dec 13 02:26:14.084026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 02:26:14.091855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:26:14.221525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:26:14.233506 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:26:14.689840 containerd[1455]: time="2024-12-13T02:26:14.689761641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:14.691111 containerd[1455]: time="2024-12-13T02:26:14.691063963Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Dec 13 02:26:14.694076 containerd[1455]: time="2024-12-13T02:26:14.694031939Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:14.702758 containerd[1455]: time="2024-12-13T02:26:14.702706368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:14.706346 containerd[1455]: time="2024-12-13T02:26:14.705915437Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.170801748s" Dec 13 02:26:14.706346 containerd[1455]: time="2024-12-13T02:26:14.705995477Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:26:14.707923 containerd[1455]: time="2024-12-13T02:26:14.707386005Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 02:26:14.721020 kubelet[2015]: E1213 02:26:14.720950 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:26:14.725671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:26:14.726671 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:26:15.278983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452787630.mount: Deactivated successfully. 
Dec 13 02:26:15.293230 containerd[1455]: time="2024-12-13T02:26:15.293002529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:15.295763 containerd[1455]: time="2024-12-13T02:26:15.295628955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Dec 13 02:26:15.298303 containerd[1455]: time="2024-12-13T02:26:15.298205296Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:15.304311 containerd[1455]: time="2024-12-13T02:26:15.304181094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:15.306817 containerd[1455]: time="2024-12-13T02:26:15.306543435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.094ms" Dec 13 02:26:15.306817 containerd[1455]: time="2024-12-13T02:26:15.306623665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 02:26:15.308402 containerd[1455]: time="2024-12-13T02:26:15.307782198Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 02:26:16.004493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796281970.mount: Deactivated successfully. Dec 13 02:26:19.547651 containerd[1455]: time="2024-12-13T02:26:19.547556283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:19.549253 containerd[1455]: time="2024-12-13T02:26:19.549180950Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Dec 13 02:26:19.550482 containerd[1455]: time="2024-12-13T02:26:19.550406198Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:19.561102 containerd[1455]: time="2024-12-13T02:26:19.559256176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:19.561102 containerd[1455]: time="2024-12-13T02:26:19.560462619Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.252628053s" Dec 13 02:26:19.561102 containerd[1455]: time="2024-12-13T02:26:19.560493908Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 02:26:24.310892 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
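With the etcd pull above, every control-plane image has been fetched. containerd reported both a byte count and the wall time for each pull, so effective throughput falls out directly (sizes and durations copied verbatim from the log lines above):

    # image: (size reported by containerd, pull duration in seconds)
    pulls = {
        "kube-apiserver:v1.31.4": (27972283, 2.833621697),
        "kube-controller-manager:v1.31.4": (26147269, 2.956095506),
        "kube-scheduler:v1.31.4": (20097197, 2.57991454),
        "kube-proxy:v1.31.4": (30229262, 2.910039218),
        "coredns:v1.11.1": (18182961, 2.170801748),
        "etcd:3.5.15-0": (56909194, 4.252628053),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image}: {size / secs / 2**20:.1f} MiB/s")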
Dec 13 02:26:24.322483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:26:24.368053 systemd[1]: Reloading requested from client PID 2107 ('systemctl') (unit session-11.scope)... Dec 13 02:26:24.368071 systemd[1]: Reloading... Dec 13 02:26:24.465178 zram_generator::config[2142]: No configuration found. Dec 13 02:26:24.661527 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:26:24.750375 systemd[1]: Reloading finished in 381 ms. Dec 13 02:26:24.806492 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 02:26:24.806585 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 02:26:24.807082 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:26:24.809684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:26:25.467957 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:26:25.468404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:26:25.545772 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:26:25.548219 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:26:25.550142 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
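This time the kubelet comes up, but immediately warns that --container-runtime-endpoint and --volume-plugin-dir should live in the file passed via --config. A hedged sketch of the equivalent KubeletConfiguration stanza, emitted as JSON (which is valid YAML, so it could be written to config.yaml directly); the field names follow the kubelet.config.k8s.io/v1beta1 schema as I understand it and the values come from elsewhere in this log, but verify both against the kubelet docs for the version in use:

    import json

    config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # endpoint from the containerd config dump earlier in the log
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        # directory from the Flexvolume probe message below
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }
    print(json.dumps(config, indent=2))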
Dec 13 02:26:25.550142 kubelet[2211]: I1213 02:26:25.548507 2211 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:26:26.044705 kubelet[2211]: I1213 02:26:26.044654 2211 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:26:26.044705 kubelet[2211]: I1213 02:26:26.044691 2211 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:26:26.045055 kubelet[2211]: I1213 02:26:26.045028 2211 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:26:26.079791 kubelet[2211]: E1213 02:26:26.079685 2211 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:26.084559 kubelet[2211]: I1213 02:26:26.084334 2211 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:26:26.109508 kubelet[2211]: E1213 02:26:26.109427 2211 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:26:26.109508 kubelet[2211]: I1213 02:26:26.109500 2211 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:26:26.120339 kubelet[2211]: I1213 02:26:26.120239 2211 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:26:26.120747 kubelet[2211]: I1213 02:26:26.120470 2211 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:26:26.120835 kubelet[2211]: I1213 02:26:26.120771 2211 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:26:26.121302 kubelet[2211]: I1213 02:26:26.120827 2211 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-b-462e46fdf9.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:26:26.121302 kubelet[2211]: I1213 02:26:26.121298 2211 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:26:26.121594 kubelet[2211]: I1213 02:26:26.121323 2211 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:26:26.121594 kubelet[2211]: I1213 02:26:26.121585 2211 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:26:26.126568 kubelet[2211]: I1213 02:26:26.126068 2211 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:26:26.126568 kubelet[2211]: I1213 02:26:26.126154 2211 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:26:26.126568 kubelet[2211]: I1213 02:26:26.126217 2211 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:26:26.126568 kubelet[2211]: I1213 02:26:26.126248 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:26:26.136891 kubelet[2211]: W1213 02:26:26.135050 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-462e46fdf9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused Dec 13 02:26:26.136891 kubelet[2211]: E1213 02:26:26.135287 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-462e46fdf9.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:26.138151 kubelet[2211]: W1213 02:26:26.138027 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused Dec 13 02:26:26.138409 kubelet[2211]: E1213 02:26:26.138368 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:26.138739 kubelet[2211]: I1213 02:26:26.138706 2211 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:26:26.143082 kubelet[2211]: I1213 02:26:26.143042 2211 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:26:26.146097 kubelet[2211]: W1213 02:26:26.146062 2211 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:26:26.150467 kubelet[2211]: I1213 02:26:26.150438 2211 server.go:1269] "Started kubelet" Dec 13 02:26:26.153976 kubelet[2211]: I1213 02:26:26.152836 2211 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:26:26.155633 kubelet[2211]: I1213 02:26:26.154892 2211 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:26:26.159887 kubelet[2211]: I1213 02:26:26.158969 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:26:26.159887 kubelet[2211]: I1213 02:26:26.159303 2211 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:26:26.159887 kubelet[2211]: I1213 02:26:26.159557 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:26:26.165578 kubelet[2211]: E1213 02:26:26.161358 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.31:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-b-462e46fdf9.novalocal.18109b75a6d99845 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-b-462e46fdf9.novalocal,UID:ci-4081-2-1-b-462e46fdf9.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-462e46fdf9.novalocal,},FirstTimestamp:2024-12-13 02:26:26.150398021 +0000 UTC m=+0.668524649,LastTimestamp:2024-12-13 02:26:26.150398021 +0000 UTC m=+0.668524649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-462e46fdf9.novalocal,}" Dec 13 02:26:26.167247 kubelet[2211]: I1213 02:26:26.166398 2211 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 
02:26:26.168576 kubelet[2211]: E1213 02:26:26.168559 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-462e46fdf9.novalocal\" not found" Dec 13 02:26:26.168675 kubelet[2211]: I1213 02:26:26.168665 2211 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:26:26.168933 kubelet[2211]: I1213 02:26:26.168917 2211 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:26:26.169054 kubelet[2211]: I1213 02:26:26.169042 2211 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:26:26.172047 kubelet[2211]: E1213 02:26:26.171978 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-462e46fdf9.novalocal?timeout=10s\": dial tcp 172.24.4.31:6443: connect: connection refused" interval="200ms" Dec 13 02:26:26.172320 kubelet[2211]: W1213 02:26:26.172192 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused Dec 13 02:26:26.172320 kubelet[2211]: E1213 02:26:26.172288 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:26.173676 kubelet[2211]: I1213 02:26:26.172787 2211 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:26:26.173676 kubelet[2211]: I1213 02:26:26.172913 2211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:26:26.174978 kubelet[2211]: E1213 02:26:26.174940 2211 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:26:26.177183 kubelet[2211]: I1213 02:26:26.176350 2211 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:26:26.192367 kubelet[2211]: I1213 02:26:26.192296 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:26:26.193515 kubelet[2211]: I1213 02:26:26.193482 2211 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:26:26.193515 kubelet[2211]: I1213 02:26:26.193517 2211 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:26:26.193613 kubelet[2211]: I1213 02:26:26.193544 2211 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:26:26.193645 kubelet[2211]: E1213 02:26:26.193606 2211 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:26:26.205405 kubelet[2211]: W1213 02:26:26.205365 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused Dec 13 02:26:26.205538 kubelet[2211]: E1213 02:26:26.205420 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:26.255495 kubelet[2211]: I1213 02:26:26.255424 2211 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:26:26.255495 kubelet[2211]: I1213 02:26:26.255445 2211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:26:26.255495 kubelet[2211]: I1213 02:26:26.255462 2211 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:26:26.269738 kubelet[2211]: E1213 02:26:26.269669 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-462e46fdf9.novalocal\" not found" Dec 13 02:26:26.294240 kubelet[2211]: E1213 02:26:26.294173 2211 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:26:26.327273 kubelet[2211]: I1213 02:26:26.326400 2211 policy_none.go:49] "None policy: Start" Dec 13 02:26:26.329272 kubelet[2211]: I1213 02:26:26.329213 2211 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:26:26.330300 kubelet[2211]: I1213 02:26:26.329827 2211 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:26:26.358788 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 02:26:26.371344 kubelet[2211]: E1213 02:26:26.370610 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-462e46fdf9.novalocal\" not found" Dec 13 02:26:26.373721 kubelet[2211]: E1213 02:26:26.373559 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-462e46fdf9.novalocal?timeout=10s\": dial tcp 172.24.4.31:6443: connect: connection refused" interval="400ms" Dec 13 02:26:26.379485 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 02:26:26.387308 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 02:26:26.403650 kubelet[2211]: I1213 02:26:26.402717 2211 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:26:26.403650 kubelet[2211]: I1213 02:26:26.403106 2211 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:26:26.403650 kubelet[2211]: I1213 02:26:26.403178 2211 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:26:26.403650 kubelet[2211]: I1213 02:26:26.403673 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:26:26.408632 kubelet[2211]: E1213 02:26:26.408593 2211 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-b-462e46fdf9.novalocal\" not found" Dec 13 02:26:26.507694 kubelet[2211]: I1213 02:26:26.507552 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.511231 kubelet[2211]: E1213 02:26:26.510662 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.31:6443/api/v1/nodes\": dial tcp 172.24.4.31:6443: connect: connection refused" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.521451 systemd[1]: Created slice kubepods-burstable-podc2ee18998d844636d2243564be580cc0.slice - libcontainer container kubepods-burstable-podc2ee18998d844636d2243564be580cc0.slice. Dec 13 02:26:26.544348 systemd[1]: Created slice kubepods-burstable-pod399c1f96fee819939ca0bd606e6d93c8.slice - libcontainer container kubepods-burstable-pod399c1f96fee819939ca0bd606e6d93c8.slice. Dec 13 02:26:26.556474 systemd[1]: Created slice kubepods-burstable-pode615c53510615278b7e2f183f01c2252.slice - libcontainer container kubepods-burstable-pode615c53510615278b7e2f183f01c2252.slice. 
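[editor's note] The kubepods-burstable-pod*.slice units systemd creates here follow the kubelet's systemd-cgroup-driver naming: a per-QoS parent slice, then one slice per pod with the UID's dashes rewritten to underscores (static-pod UIDs like c2ee… are config hashes and contain none; the kube-proxy pod later in this log shows the underscore rewrite). A sketch that reproduces the unit names in this log; treating guaranteed pods as living directly under kubepods.slice is an assumption, since none appear here:

    # Sketch: pod UID + QoS class -> systemd slice name, matching the
    # kubepods-*.slice units in this log.
    def pod_slice(uid: str, qos: str) -> str:
        qos_part = "" if qos == "guaranteed" else f"-{qos}"  # assumption
        return f"kubepods{qos_part}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("c2ee18998d844636d2243564be580cc0", "burstable"))
    # kubepods-burstable-podc2ee18998d844636d2243564be580cc0.slice
    print(pod_slice("6040b96b-bfdf-4c66-9a3a-ceeb748da80e", "besteffort"))
    # kubepods-besteffort-pod6040b96b_bfdf_4c66_9a3a_ceeb748da80e.slice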
Dec 13 02:26:26.572189 kubelet[2211]: I1213 02:26:26.571861 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2ee18998d844636d2243564be580cc0-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"c2ee18998d844636d2243564be580cc0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.572189 kubelet[2211]: I1213 02:26:26.572093 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2ee18998d844636d2243564be580cc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"c2ee18998d844636d2243564be580cc0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.572189 kubelet[2211]: I1213 02:26:26.572191 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.572966 kubelet[2211]: I1213 02:26:26.572249 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.572966 kubelet[2211]: I1213 02:26:26.572299 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e615c53510615278b7e2f183f01c2252-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"e615c53510615278b7e2f183f01c2252\") " pod="kube-system/kube-scheduler-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.572966 kubelet[2211]: I1213 02:26:26.572343 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2ee18998d844636d2243564be580cc0-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"c2ee18998d844636d2243564be580cc0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.572966 kubelet[2211]: I1213 02:26:26.572386 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.573291 kubelet[2211]: I1213 02:26:26.572435 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " 
pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.573291 kubelet[2211]: I1213 02:26:26.572477 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.715786 kubelet[2211]: I1213 02:26:26.715648 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.716601 kubelet[2211]: E1213 02:26:26.716453 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.31:6443/api/v1/nodes\": dial tcp 172.24.4.31:6443: connect: connection refused" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:26.775495 kubelet[2211]: E1213 02:26:26.775400 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-462e46fdf9.novalocal?timeout=10s\": dial tcp 172.24.4.31:6443: connect: connection refused" interval="800ms" Dec 13 02:26:26.838642 containerd[1455]: time="2024-12-13T02:26:26.838499615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal,Uid:c2ee18998d844636d2243564be580cc0,Namespace:kube-system,Attempt:0,}" Dec 13 02:26:27.050765 containerd[1455]: time="2024-12-13T02:26:27.050503899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-462e46fdf9.novalocal,Uid:e615c53510615278b7e2f183f01c2252,Namespace:kube-system,Attempt:0,}" Dec 13 02:26:27.051197 containerd[1455]: time="2024-12-13T02:26:27.050997074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal,Uid:399c1f96fee819939ca0bd606e6d93c8,Namespace:kube-system,Attempt:0,}" Dec 13 02:26:27.120087 kubelet[2211]: I1213 02:26:27.119892 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:27.121194 kubelet[2211]: E1213 02:26:27.121079 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.31:6443/api/v1/nodes\": dial tcp 172.24.4.31:6443: connect: connection refused" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:27.476364 kubelet[2211]: W1213 02:26:27.476214 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused Dec 13 02:26:27.476364 kubelet[2211]: E1213 02:26:27.476373 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:27.533295 kubelet[2211]: W1213 02:26:27.532620 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: 
connect: connection refused Dec 13 02:26:27.533295 kubelet[2211]: E1213 02:26:27.532789 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:27.577315 kubelet[2211]: E1213 02:26:27.577236 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-462e46fdf9.novalocal?timeout=10s\": dial tcp 172.24.4.31:6443: connect: connection refused" interval="1.6s" Dec 13 02:26:27.672577 kubelet[2211]: W1213 02:26:27.672431 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused Dec 13 02:26:27.672577 kubelet[2211]: E1213 02:26:27.672520 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:27.695971 kubelet[2211]: W1213 02:26:27.695774 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-462e46fdf9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused Dec 13 02:26:27.695971 kubelet[2211]: E1213 02:26:27.695908 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-462e46fdf9.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:27.805832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896597061.mount: Deactivated successfully. 
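[editor's note] The "Failed to ensure lease exists, will retry" intervals above (200ms, 400ms, 800ms, 1.6s) double on every failed attempt while the apiserver refuses connections. A sketch of that schedule; the 7s cap is an assumed kubelet default and is not visible in this log:

    from datetime import timedelta

    def lease_retry_intervals(base=timedelta(milliseconds=200),
                              cap=timedelta(seconds=7)):
        """Doubling retry interval for lease creation; cap is assumed."""
        interval = base
        while True:
            yield min(interval, cap)
            interval = interval * 2

    g = lease_retry_intervals()
    print([str(next(g)) for _ in range(4)])
    # ['0:00:00.200000', '0:00:00.400000', '0:00:00.800000', '0:00:01.600000']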
Dec 13 02:26:27.814880 containerd[1455]: time="2024-12-13T02:26:27.814516705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:26:27.819178 containerd[1455]: time="2024-12-13T02:26:27.818937266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 02:26:27.822149 containerd[1455]: time="2024-12-13T02:26:27.821972248Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:26:27.825796 containerd[1455]: time="2024-12-13T02:26:27.825543616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:26:27.828532 containerd[1455]: time="2024-12-13T02:26:27.827668782Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:26:27.828532 containerd[1455]: time="2024-12-13T02:26:27.828442964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:26:27.832905 containerd[1455]: time="2024-12-13T02:26:27.832838237Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:26:27.840632 containerd[1455]: time="2024-12-13T02:26:27.840185938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 789.060573ms" Dec 13 02:26:27.842769 containerd[1455]: time="2024-12-13T02:26:27.842359815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:26:27.851706 containerd[1455]: time="2024-12-13T02:26:27.851472676Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 800.783529ms" Dec 13 02:26:27.855403 containerd[1455]: time="2024-12-13T02:26:27.855311556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.016570327s" Dec 13 02:26:27.977518 kubelet[2211]: I1213 02:26:27.976791 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:27.977807 kubelet[2211]: E1213 02:26:27.977719 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://172.24.4.31:6443/api/v1/nodes\": dial tcp 172.24.4.31:6443: connect: connection refused" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:28.195932 kubelet[2211]: E1213 02:26:28.195675 2211 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.403863705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.404034175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.404070934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.404350579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.402876784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.403316289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.403532214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.403633995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.403795007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.403810045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:28.404950 containerd[1455]: time="2024-12-13T02:26:28.404088978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:28.406878 containerd[1455]: time="2024-12-13T02:26:28.406168688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:28.443386 systemd[1]: Started cri-containerd-2ffedb50d61685c0b316691ab1e4c6255edad7882d67b99ccbcc5cd7def8b209.scope - libcontainer container 2ffedb50d61685c0b316691ab1e4c6255edad7882d67b99ccbcc5cd7def8b209. Dec 13 02:26:28.450819 systemd[1]: Started cri-containerd-450f74bb370d96097eb23bcbfed85af60fd4e9b08b44ad26aeded2f2aa6bf09d.scope - libcontainer container 450f74bb370d96097eb23bcbfed85af60fd4e9b08b44ad26aeded2f2aa6bf09d. 
Dec 13 02:26:28.453232 systemd[1]: Started cri-containerd-8e7ee6ab903de15d4ee1581e003ae87b541ef697d5561dcc4d8231674d15a2ac.scope - libcontainer container 8e7ee6ab903de15d4ee1581e003ae87b541ef697d5561dcc4d8231674d15a2ac. Dec 13 02:26:28.545468 containerd[1455]: time="2024-12-13T02:26:28.545271516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal,Uid:399c1f96fee819939ca0bd606e6d93c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ffedb50d61685c0b316691ab1e4c6255edad7882d67b99ccbcc5cd7def8b209\"" Dec 13 02:26:28.546113 containerd[1455]: time="2024-12-13T02:26:28.545890266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal,Uid:c2ee18998d844636d2243564be580cc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e7ee6ab903de15d4ee1581e003ae87b541ef697d5561dcc4d8231674d15a2ac\"" Dec 13 02:26:28.559565 containerd[1455]: time="2024-12-13T02:26:28.559253389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-462e46fdf9.novalocal,Uid:e615c53510615278b7e2f183f01c2252,Namespace:kube-system,Attempt:0,} returns sandbox id \"450f74bb370d96097eb23bcbfed85af60fd4e9b08b44ad26aeded2f2aa6bf09d\"" Dec 13 02:26:28.564447 containerd[1455]: time="2024-12-13T02:26:28.564294473Z" level=info msg="CreateContainer within sandbox \"450f74bb370d96097eb23bcbfed85af60fd4e9b08b44ad26aeded2f2aa6bf09d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:26:28.564616 containerd[1455]: time="2024-12-13T02:26:28.564587122Z" level=info msg="CreateContainer within sandbox \"8e7ee6ab903de15d4ee1581e003ae87b541ef697d5561dcc4d8231674d15a2ac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:26:28.565073 containerd[1455]: time="2024-12-13T02:26:28.565045893Z" level=info msg="CreateContainer within sandbox \"2ffedb50d61685c0b316691ab1e4c6255edad7882d67b99ccbcc5cd7def8b209\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:26:28.692367 containerd[1455]: time="2024-12-13T02:26:28.692231359Z" level=info msg="CreateContainer within sandbox \"8e7ee6ab903de15d4ee1581e003ae87b541ef697d5561dcc4d8231674d15a2ac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31bb85b2c814cebfb7c7ad260c0503c363c1441a43a43a7264b99263870aa259\"" Dec 13 02:26:28.696025 containerd[1455]: time="2024-12-13T02:26:28.695967797Z" level=info msg="StartContainer for \"31bb85b2c814cebfb7c7ad260c0503c363c1441a43a43a7264b99263870aa259\"" Dec 13 02:26:28.703176 containerd[1455]: time="2024-12-13T02:26:28.702359725Z" level=info msg="CreateContainer within sandbox \"450f74bb370d96097eb23bcbfed85af60fd4e9b08b44ad26aeded2f2aa6bf09d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3be2ce1013672e903626ee9635ff6d581dd7adddce986e4595532831e807d85b\"" Dec 13 02:26:28.706252 containerd[1455]: time="2024-12-13T02:26:28.705878875Z" level=info msg="StartContainer for \"3be2ce1013672e903626ee9635ff6d581dd7adddce986e4595532831e807d85b\"" Dec 13 02:26:28.723508 containerd[1455]: time="2024-12-13T02:26:28.722805901Z" level=info msg="CreateContainer within sandbox \"2ffedb50d61685c0b316691ab1e4c6255edad7882d67b99ccbcc5cd7def8b209\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e1185a1ef8aa0ce2263c847c8d30f155fcadb777bd875591d5c363b76c40097\"" Dec 13 02:26:28.726270 containerd[1455]: time="2024-12-13T02:26:28.724070874Z" level=info 
msg="StartContainer for \"6e1185a1ef8aa0ce2263c847c8d30f155fcadb777bd875591d5c363b76c40097\"" Dec 13 02:26:28.765646 systemd[1]: Started cri-containerd-31bb85b2c814cebfb7c7ad260c0503c363c1441a43a43a7264b99263870aa259.scope - libcontainer container 31bb85b2c814cebfb7c7ad260c0503c363c1441a43a43a7264b99263870aa259. Dec 13 02:26:28.775515 systemd[1]: Started cri-containerd-3be2ce1013672e903626ee9635ff6d581dd7adddce986e4595532831e807d85b.scope - libcontainer container 3be2ce1013672e903626ee9635ff6d581dd7adddce986e4595532831e807d85b. Dec 13 02:26:28.810280 systemd[1]: Started cri-containerd-6e1185a1ef8aa0ce2263c847c8d30f155fcadb777bd875591d5c363b76c40097.scope - libcontainer container 6e1185a1ef8aa0ce2263c847c8d30f155fcadb777bd875591d5c363b76c40097. Dec 13 02:26:28.819213 kubelet[2211]: E1213 02:26:28.819051 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.31:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-b-462e46fdf9.novalocal.18109b75a6d99845 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-b-462e46fdf9.novalocal,UID:ci-4081-2-1-b-462e46fdf9.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-462e46fdf9.novalocal,},FirstTimestamp:2024-12-13 02:26:26.150398021 +0000 UTC m=+0.668524649,LastTimestamp:2024-12-13 02:26:26.150398021 +0000 UTC m=+0.668524649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-462e46fdf9.novalocal,}" Dec 13 02:26:28.896820 containerd[1455]: time="2024-12-13T02:26:28.896756576Z" level=info msg="StartContainer for \"31bb85b2c814cebfb7c7ad260c0503c363c1441a43a43a7264b99263870aa259\" returns successfully" Dec 13 02:26:28.897418 containerd[1455]: time="2024-12-13T02:26:28.896968173Z" level=info msg="StartContainer for \"3be2ce1013672e903626ee9635ff6d581dd7adddce986e4595532831e807d85b\" returns successfully" Dec 13 02:26:28.897418 containerd[1455]: time="2024-12-13T02:26:28.897008999Z" level=info msg="StartContainer for \"6e1185a1ef8aa0ce2263c847c8d30f155fcadb777bd875591d5c363b76c40097\" returns successfully" Dec 13 02:26:29.579799 kubelet[2211]: I1213 02:26:29.579759 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:31.617570 kubelet[2211]: E1213 02:26:31.617537 2211 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-b-462e46fdf9.novalocal\" not found" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:31.656490 kubelet[2211]: I1213 02:26:31.656329 2211 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:31.656490 kubelet[2211]: E1213 02:26:31.656360 2211 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-2-1-b-462e46fdf9.novalocal\": node \"ci-4081-2-1-b-462e46fdf9.novalocal\" not found" Dec 13 02:26:32.154810 kubelet[2211]: I1213 02:26:32.154312 2211 apiserver.go:52] "Watching apiserver" Dec 13 02:26:32.170358 kubelet[2211]: I1213 02:26:32.170250 2211 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:26:32.463557 kubelet[2211]: E1213 02:26:32.462789 2211 kubelet.go:1915] 
"Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:34.118589 systemd[1]: Reloading requested from client PID 2485 ('systemctl') (unit session-11.scope)... Dec 13 02:26:34.119290 systemd[1]: Reloading... Dec 13 02:26:34.248366 zram_generator::config[2524]: No configuration found. Dec 13 02:26:34.423215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:26:34.535515 systemd[1]: Reloading finished in 415 ms. Dec 13 02:26:34.582526 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:26:34.600825 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:26:34.601152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:26:34.601227 systemd[1]: kubelet.service: Consumed 1.293s CPU time, 112.0M memory peak, 0B memory swap peak. Dec 13 02:26:34.607379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:26:35.013934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:26:35.036102 (kubelet)[2588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:26:35.342588 kubelet[2588]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:26:35.343589 kubelet[2588]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:26:35.343589 kubelet[2588]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:26:35.343589 kubelet[2588]: I1213 02:26:35.343154 2588 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:26:35.351898 kubelet[2588]: I1213 02:26:35.351712 2588 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:26:35.351898 kubelet[2588]: I1213 02:26:35.351740 2588 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:26:35.352069 kubelet[2588]: I1213 02:26:35.351966 2588 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:26:35.353331 kubelet[2588]: I1213 02:26:35.353307 2588 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 02:26:35.355864 kubelet[2588]: I1213 02:26:35.355551 2588 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:26:35.363713 kubelet[2588]: E1213 02:26:35.362663 2588 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:26:35.363713 kubelet[2588]: I1213 02:26:35.363703 2588 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:26:35.369836 kubelet[2588]: I1213 02:26:35.369716 2588 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:26:35.369836 kubelet[2588]: I1213 02:26:35.369836 2588 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:26:35.370057 kubelet[2588]: I1213 02:26:35.369949 2588 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:26:35.371163 kubelet[2588]: I1213 02:26:35.369992 2588 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-b-462e46fdf9.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:26:35.371163 kubelet[2588]: I1213 02:26:35.370393 2588 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:26:35.371163 kubelet[2588]: I1213 02:26:35.370406 2588 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:26:35.371163 kubelet[2588]: I1213 02:26:35.370444 2588 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:26:35.374599 kubelet[2588]: I1213 02:26:35.374070 2588 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:26:35.374599 kubelet[2588]: I1213 02:26:35.374105 2588 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:26:35.374599 kubelet[2588]: 
I1213 02:26:35.374160 2588 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:26:35.374599 kubelet[2588]: I1213 02:26:35.374180 2588 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:26:35.378002 kubelet[2588]: I1213 02:26:35.377982 2588 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:26:35.379392 kubelet[2588]: I1213 02:26:35.378617 2588 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:26:35.380009 kubelet[2588]: I1213 02:26:35.379993 2588 server.go:1269] "Started kubelet" Dec 13 02:26:35.384091 kubelet[2588]: I1213 02:26:35.383591 2588 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:26:35.385839 kubelet[2588]: I1213 02:26:35.385807 2588 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:26:35.386005 kubelet[2588]: I1213 02:26:35.385907 2588 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:26:35.388594 kubelet[2588]: I1213 02:26:35.388342 2588 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:26:35.392663 kubelet[2588]: I1213 02:26:35.392038 2588 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:26:35.392663 kubelet[2588]: I1213 02:26:35.392268 2588 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:26:35.399647 kubelet[2588]: I1213 02:26:35.399610 2588 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:26:35.401360 kubelet[2588]: E1213 02:26:35.399977 2588 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-462e46fdf9.novalocal\" not found" Dec 13 02:26:35.401360 kubelet[2588]: I1213 02:26:35.400196 2588 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:26:35.401360 kubelet[2588]: I1213 02:26:35.400338 2588 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:26:35.411766 kubelet[2588]: I1213 02:26:35.411601 2588 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:26:35.417158 kubelet[2588]: I1213 02:26:35.415770 2588 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:26:35.417158 kubelet[2588]: I1213 02:26:35.415791 2588 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:26:35.438059 kubelet[2588]: I1213 02:26:35.437992 2588 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:26:35.438965 kubelet[2588]: I1213 02:26:35.438936 2588 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:26:35.439021 kubelet[2588]: I1213 02:26:35.438975 2588 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:26:35.439021 kubelet[2588]: I1213 02:26:35.438997 2588 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:26:35.439075 kubelet[2588]: E1213 02:26:35.439043 2588 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:26:35.439323 kubelet[2588]: E1213 02:26:35.439298 2588 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:26:35.504493 kubelet[2588]: I1213 02:26:35.504458 2588 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:26:35.504493 kubelet[2588]: I1213 02:26:35.504484 2588 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:26:35.504646 kubelet[2588]: I1213 02:26:35.504506 2588 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:26:35.504712 kubelet[2588]: I1213 02:26:35.504687 2588 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:26:35.504747 kubelet[2588]: I1213 02:26:35.504707 2588 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:26:35.504747 kubelet[2588]: I1213 02:26:35.504735 2588 policy_none.go:49] "None policy: Start" Dec 13 02:26:35.505356 kubelet[2588]: I1213 02:26:35.505337 2588 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:26:35.505429 kubelet[2588]: I1213 02:26:35.505361 2588 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:26:35.505548 kubelet[2588]: I1213 02:26:35.505515 2588 state_mem.go:75] "Updated machine memory state" Dec 13 02:26:35.509983 kubelet[2588]: I1213 02:26:35.509940 2588 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:26:35.510182 kubelet[2588]: I1213 02:26:35.510157 2588 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:26:35.510228 kubelet[2588]: I1213 02:26:35.510180 2588 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:26:35.531276 kubelet[2588]: I1213 02:26:35.530505 2588 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:26:35.575462 kubelet[2588]: W1213 02:26:35.574949 2588 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:26:35.575462 kubelet[2588]: W1213 02:26:35.575238 2588 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:26:35.580363 kubelet[2588]: W1213 02:26:35.580275 2588 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:26:35.604955 kubelet[2588]: I1213 02:26:35.604871 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.605309 
kubelet[2588]: I1213 02:26:35.605068 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.605309 kubelet[2588]: I1213 02:26:35.605105 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.605309 kubelet[2588]: I1213 02:26:35.605149 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2ee18998d844636d2243564be580cc0-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"c2ee18998d844636d2243564be580cc0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.605309 kubelet[2588]: I1213 02:26:35.605168 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2ee18998d844636d2243564be580cc0-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"c2ee18998d844636d2243564be580cc0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.605435 kubelet[2588]: I1213 02:26:35.605192 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2ee18998d844636d2243564be580cc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"c2ee18998d844636d2243564be580cc0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.605435 kubelet[2588]: I1213 02:26:35.605213 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.605435 kubelet[2588]: I1213 02:26:35.605231 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/399c1f96fee819939ca0bd606e6d93c8-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"399c1f96fee819939ca0bd606e6d93c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.605435 kubelet[2588]: I1213 02:26:35.605248 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e615c53510615278b7e2f183f01c2252-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-462e46fdf9.novalocal\" (UID: \"e615c53510615278b7e2f183f01c2252\") " 
pod="kube-system/kube-scheduler-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.640648 kubelet[2588]: I1213 02:26:35.640356 2588 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.667990 kubelet[2588]: I1213 02:26:35.667935 2588 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:35.668189 kubelet[2588]: I1213 02:26:35.668116 2588 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:36.376945 kubelet[2588]: I1213 02:26:36.376701 2588 apiserver.go:52] "Watching apiserver" Dec 13 02:26:36.401018 kubelet[2588]: I1213 02:26:36.400952 2588 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:26:36.496959 kubelet[2588]: W1213 02:26:36.496724 2588 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:26:36.496959 kubelet[2588]: E1213 02:26:36.496808 2588 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:26:36.511681 kubelet[2588]: I1213 02:26:36.511277 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-b-462e46fdf9.novalocal" podStartSLOduration=1.5112580740000001 podStartE2EDuration="1.511258074s" podCreationTimestamp="2024-12-13 02:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:26:36.509252155 +0000 UTC m=+1.252082484" watchObservedRunningTime="2024-12-13 02:26:36.511258074 +0000 UTC m=+1.254088413" Dec 13 02:26:36.539694 kubelet[2588]: I1213 02:26:36.539514 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-b-462e46fdf9.novalocal" podStartSLOduration=1.5394913749999999 podStartE2EDuration="1.539491375s" podCreationTimestamp="2024-12-13 02:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:26:36.526056412 +0000 UTC m=+1.268886751" watchObservedRunningTime="2024-12-13 02:26:36.539491375 +0000 UTC m=+1.282321705" Dec 13 02:26:36.555858 kubelet[2588]: I1213 02:26:36.555783 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-b-462e46fdf9.novalocal" podStartSLOduration=1.555765183 podStartE2EDuration="1.555765183s" podCreationTimestamp="2024-12-13 02:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:26:36.540238293 +0000 UTC m=+1.283068622" watchObservedRunningTime="2024-12-13 02:26:36.555765183 +0000 UTC m=+1.298595512" Dec 13 02:26:40.325735 kubelet[2588]: I1213 02:26:40.325350 2588 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:26:40.328024 containerd[1455]: time="2024-12-13T02:26:40.327828453Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 02:26:40.329219 kubelet[2588]: I1213 02:26:40.328979 2588 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:26:40.343311 sudo[1708]: pam_unix(sudo:session): session closed for user root Dec 13 02:26:40.546336 sshd[1705]: pam_unix(sshd:session): session closed for user core Dec 13 02:26:40.553979 systemd[1]: sshd@8-172.24.4.31:22-172.24.4.1:46916.service: Deactivated successfully. Dec 13 02:26:40.557396 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:26:40.557711 systemd[1]: session-11.scope: Consumed 7.189s CPU time, 151.4M memory peak, 0B memory swap peak. Dec 13 02:26:40.558807 systemd-logind[1431]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:26:40.561604 systemd-logind[1431]: Removed session 11. Dec 13 02:26:41.307669 systemd[1]: Created slice kubepods-besteffort-pod6040b96b_bfdf_4c66_9a3a_ceeb748da80e.slice - libcontainer container kubepods-besteffort-pod6040b96b_bfdf_4c66_9a3a_ceeb748da80e.slice. Dec 13 02:26:41.343619 kubelet[2588]: I1213 02:26:41.343289 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6040b96b-bfdf-4c66-9a3a-ceeb748da80e-kube-proxy\") pod \"kube-proxy-nc2m9\" (UID: \"6040b96b-bfdf-4c66-9a3a-ceeb748da80e\") " pod="kube-system/kube-proxy-nc2m9" Dec 13 02:26:41.343619 kubelet[2588]: I1213 02:26:41.343338 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6040b96b-bfdf-4c66-9a3a-ceeb748da80e-xtables-lock\") pod \"kube-proxy-nc2m9\" (UID: \"6040b96b-bfdf-4c66-9a3a-ceeb748da80e\") " pod="kube-system/kube-proxy-nc2m9" Dec 13 02:26:41.343619 kubelet[2588]: I1213 02:26:41.343361 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6040b96b-bfdf-4c66-9a3a-ceeb748da80e-lib-modules\") pod \"kube-proxy-nc2m9\" (UID: \"6040b96b-bfdf-4c66-9a3a-ceeb748da80e\") " pod="kube-system/kube-proxy-nc2m9" Dec 13 02:26:41.343619 kubelet[2588]: I1213 02:26:41.343385 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8jls\" (UniqueName: \"kubernetes.io/projected/6040b96b-bfdf-4c66-9a3a-ceeb748da80e-kube-api-access-n8jls\") pod \"kube-proxy-nc2m9\" (UID: \"6040b96b-bfdf-4c66-9a3a-ceeb748da80e\") " pod="kube-system/kube-proxy-nc2m9" Dec 13 02:26:41.438505 systemd[1]: Created slice kubepods-besteffort-podb4d89faf_0906_44e7_b358_b82707dff325.slice - libcontainer container kubepods-besteffort-podb4d89faf_0906_44e7_b358_b82707dff325.slice. 
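[editor's note] The kube-api-access-n8jls projected volume above carries the pod's service-account token; the five-character suffix is random. A sketch of the naming; the vowel-free alphabet below is an assumption about the upstream rand utility, not something this log confirms:

    import random

    # Assumed alphabet (no vowels or look-alike characters); "n8jls" fits it.
    ALPHABET = "bcdfghjklmnpqrstvwxz2456789"

    def api_access_volume_name() -> str:
        return "kube-api-access-" + "".join(random.choices(ALPHABET, k=5))

    print(api_access_volume_name())  # e.g. kube-api-access-n8jls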
Dec 13 02:26:41.444262 kubelet[2588]: I1213 02:26:41.443842 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79kdt\" (UniqueName: \"kubernetes.io/projected/b4d89faf-0906-44e7-b358-b82707dff325-kube-api-access-79kdt\") pod \"tigera-operator-76c4976dd7-k9zrf\" (UID: \"b4d89faf-0906-44e7-b358-b82707dff325\") " pod="tigera-operator/tigera-operator-76c4976dd7-k9zrf" Dec 13 02:26:41.444262 kubelet[2588]: I1213 02:26:41.443938 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4d89faf-0906-44e7-b358-b82707dff325-var-lib-calico\") pod \"tigera-operator-76c4976dd7-k9zrf\" (UID: \"b4d89faf-0906-44e7-b358-b82707dff325\") " pod="tigera-operator/tigera-operator-76c4976dd7-k9zrf" Dec 13 02:26:41.625769 containerd[1455]: time="2024-12-13T02:26:41.625675673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nc2m9,Uid:6040b96b-bfdf-4c66-9a3a-ceeb748da80e,Namespace:kube-system,Attempt:0,}" Dec 13 02:26:41.710088 containerd[1455]: time="2024-12-13T02:26:41.709811300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:41.710746 containerd[1455]: time="2024-12-13T02:26:41.710213453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:41.711097 containerd[1455]: time="2024-12-13T02:26:41.710643118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:41.711603 containerd[1455]: time="2024-12-13T02:26:41.711402588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:41.747227 containerd[1455]: time="2024-12-13T02:26:41.747020394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-k9zrf,Uid:b4d89faf-0906-44e7-b358-b82707dff325,Namespace:tigera-operator,Attempt:0,}" Dec 13 02:26:41.766306 systemd[1]: Started cri-containerd-58ad0f0174c53aa633404d01d021b8667411625a2e9ac2d23da11267a1273e75.scope - libcontainer container 58ad0f0174c53aa633404d01d021b8667411625a2e9ac2d23da11267a1273e75. Dec 13 02:26:41.797360 containerd[1455]: time="2024-12-13T02:26:41.797233561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:41.797360 containerd[1455]: time="2024-12-13T02:26:41.797295140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:41.797360 containerd[1455]: time="2024-12-13T02:26:41.797314616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:41.797651 containerd[1455]: time="2024-12-13T02:26:41.797409880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:41.813560 containerd[1455]: time="2024-12-13T02:26:41.813524188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nc2m9,Uid:6040b96b-bfdf-4c66-9a3a-ceeb748da80e,Namespace:kube-system,Attempt:0,} returns sandbox id \"58ad0f0174c53aa633404d01d021b8667411625a2e9ac2d23da11267a1273e75\"" Dec 13 02:26:41.816699 containerd[1455]: time="2024-12-13T02:26:41.816489524Z" level=info msg="CreateContainer within sandbox \"58ad0f0174c53aa633404d01d021b8667411625a2e9ac2d23da11267a1273e75\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:26:41.829366 systemd[1]: Started cri-containerd-6b9980019e09d044c1afdf4e33449a78ad7458a5a9825026753f1f185077228a.scope - libcontainer container 6b9980019e09d044c1afdf4e33449a78ad7458a5a9825026753f1f185077228a. Dec 13 02:26:41.850054 containerd[1455]: time="2024-12-13T02:26:41.850003810Z" level=info msg="CreateContainer within sandbox \"58ad0f0174c53aa633404d01d021b8667411625a2e9ac2d23da11267a1273e75\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"360fde4568d2cdc1160375d4f1db196ba2f86f9f79550b1ec36c2fc9d3f357b7\"" Dec 13 02:26:41.852689 containerd[1455]: time="2024-12-13T02:26:41.851037576Z" level=info msg="StartContainer for \"360fde4568d2cdc1160375d4f1db196ba2f86f9f79550b1ec36c2fc9d3f357b7\"" Dec 13 02:26:41.882454 containerd[1455]: time="2024-12-13T02:26:41.882330625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-k9zrf,Uid:b4d89faf-0906-44e7-b358-b82707dff325,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6b9980019e09d044c1afdf4e33449a78ad7458a5a9825026753f1f185077228a\"" Dec 13 02:26:41.890361 systemd[1]: Started cri-containerd-360fde4568d2cdc1160375d4f1db196ba2f86f9f79550b1ec36c2fc9d3f357b7.scope - libcontainer container 360fde4568d2cdc1160375d4f1db196ba2f86f9f79550b1ec36c2fc9d3f357b7. Dec 13 02:26:41.893832 containerd[1455]: time="2024-12-13T02:26:41.893697232Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 02:26:41.936242 containerd[1455]: time="2024-12-13T02:26:41.936185358Z" level=info msg="StartContainer for \"360fde4568d2cdc1160375d4f1db196ba2f86f9f79550b1ec36c2fc9d3f357b7\" returns successfully" Dec 13 02:26:43.828026 kubelet[2588]: I1213 02:26:43.826787 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nc2m9" podStartSLOduration=2.826683488 podStartE2EDuration="2.826683488s" podCreationTimestamp="2024-12-13 02:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:26:42.512683309 +0000 UTC m=+7.255513658" watchObservedRunningTime="2024-12-13 02:26:43.826683488 +0000 UTC m=+8.569513877" Dec 13 02:26:47.593756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2399629101.mount: Deactivated successfully. 
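
The containerd entries above trace the standard CRI sequence the kubelet drives for kube-proxy-nc2m9: RunPodSandbox returns a sandbox id (58ad0f01...), CreateContainer inside that sandbox returns a container id (360fde45...), and StartContainer runs it. Below is a minimal client sketch of that sequence against the CRI v1 gRPC API; assumptions: containerd's default socket path, a compatible k8s.io/cri-api version, and a placeholder image reference, since the log never names kube-proxy's image.

    // crirun.go: a sketch of RunPodSandbox -> CreateContainer ->
    // StartContainer, the same three CRI calls logged above.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-nc2m9",
                Uid:       "6040b96b-bfdf-4c66-9a3a-ceeb748da80e",
                Namespace: "kube-system",
                Attempt:   0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx,
            &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                // Placeholder image; the log does not record the actual tag.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        if _, err := rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }

The same sequence can be exercised by hand with crictl runp, crictl create, and crictl start.
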
Dec 13 02:26:48.341464 containerd[1455]: time="2024-12-13T02:26:48.341341306Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:48.343680 containerd[1455]: time="2024-12-13T02:26:48.343377103Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764317" Dec 13 02:26:48.345141 containerd[1455]: time="2024-12-13T02:26:48.345067672Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:48.352163 containerd[1455]: time="2024-12-13T02:26:48.351960480Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:48.356112 containerd[1455]: time="2024-12-13T02:26:48.356077572Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 6.46233399s" Dec 13 02:26:48.356758 containerd[1455]: time="2024-12-13T02:26:48.356252825Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 02:26:48.468843 containerd[1455]: time="2024-12-13T02:26:48.468793645Z" level=info msg="CreateContainer within sandbox \"6b9980019e09d044c1afdf4e33449a78ad7458a5a9825026753f1f185077228a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 02:26:48.491641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126694014.mount: Deactivated successfully. Dec 13 02:26:48.494973 containerd[1455]: time="2024-12-13T02:26:48.494793654Z" level=info msg="CreateContainer within sandbox \"6b9980019e09d044c1afdf4e33449a78ad7458a5a9825026753f1f185077228a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"db1ca395c8a14c474b618ce11a972e9b7e8909a18463c3892ddf91ada91e3e76\"" Dec 13 02:26:48.496607 containerd[1455]: time="2024-12-13T02:26:48.496545060Z" level=info msg="StartContainer for \"db1ca395c8a14c474b618ce11a972e9b7e8909a18463c3892ddf91ada91e3e76\"" Dec 13 02:26:48.545725 systemd[1]: run-containerd-runc-k8s.io-db1ca395c8a14c474b618ce11a972e9b7e8909a18463c3892ddf91ada91e3e76-runc.CEaIba.mount: Deactivated successfully. Dec 13 02:26:48.559303 systemd[1]: Started cri-containerd-db1ca395c8a14c474b618ce11a972e9b7e8909a18463c3892ddf91ada91e3e76.scope - libcontainer container db1ca395c8a14c474b618ce11a972e9b7e8909a18463c3892ddf91ada91e3e76. 
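
The "in 6.46233399s" figure above is simply the wall-clock gap between the PullImage request for quay.io/tigera/operator:v1.36.2 (logged earlier at 02:26:41.893697232Z) and this completion message. Recomputing from the two log timestamps reproduces it to within about 50 microseconds; the small residue is presumably the slightly different instants containerd samples for the log line versus the measured duration:

    // pulltime.go: sanity-checking the pull duration against the two
    // containerd timestamps that bracket the tigera-operator pull.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start, _ := time.Parse(time.RFC3339Nano, "2024-12-13T02:26:41.893697232Z")
        done, _ := time.Parse(time.RFC3339Nano, "2024-12-13T02:26:48.356077572Z")
        // Prints 6.46238034s, within ~50µs of the logged 6.46233399s.
        fmt.Println(done.Sub(start))
    }
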
Dec 13 02:26:48.608806 containerd[1455]: time="2024-12-13T02:26:48.608635803Z" level=info msg="StartContainer for \"db1ca395c8a14c474b618ce11a972e9b7e8909a18463c3892ddf91ada91e3e76\" returns successfully" Dec 13 02:26:49.632340 kubelet[2588]: I1213 02:26:49.629964 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-k9zrf" podStartSLOduration=2.108353598 podStartE2EDuration="8.629924108s" podCreationTimestamp="2024-12-13 02:26:41 +0000 UTC" firstStartedPulling="2024-12-13 02:26:41.893093142 +0000 UTC m=+6.635923471" lastFinishedPulling="2024-12-13 02:26:48.414663602 +0000 UTC m=+13.157493981" observedRunningTime="2024-12-13 02:26:49.629053101 +0000 UTC m=+14.371883480" watchObservedRunningTime="2024-12-13 02:26:49.629924108 +0000 UTC m=+14.372754557" Dec 13 02:26:52.827081 systemd[1]: Created slice kubepods-besteffort-podd3313764_c25a_40ad_9f30_c39affa2bbac.slice - libcontainer container kubepods-besteffort-podd3313764_c25a_40ad_9f30_c39affa2bbac.slice. Dec 13 02:26:52.940747 kubelet[2588]: I1213 02:26:52.940513 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d3313764-c25a-40ad-9f30-c39affa2bbac-typha-certs\") pod \"calico-typha-6798bdff7f-zlgg7\" (UID: \"d3313764-c25a-40ad-9f30-c39affa2bbac\") " pod="calico-system/calico-typha-6798bdff7f-zlgg7" Dec 13 02:26:52.940747 kubelet[2588]: I1213 02:26:52.940603 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mghdw\" (UniqueName: \"kubernetes.io/projected/d3313764-c25a-40ad-9f30-c39affa2bbac-kube-api-access-mghdw\") pod \"calico-typha-6798bdff7f-zlgg7\" (UID: \"d3313764-c25a-40ad-9f30-c39affa2bbac\") " pod="calico-system/calico-typha-6798bdff7f-zlgg7" Dec 13 02:26:52.940747 kubelet[2588]: I1213 02:26:52.940664 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3313764-c25a-40ad-9f30-c39affa2bbac-tigera-ca-bundle\") pod \"calico-typha-6798bdff7f-zlgg7\" (UID: \"d3313764-c25a-40ad-9f30-c39affa2bbac\") " pod="calico-system/calico-typha-6798bdff7f-zlgg7" Dec 13 02:26:53.135692 containerd[1455]: time="2024-12-13T02:26:53.135601747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6798bdff7f-zlgg7,Uid:d3313764-c25a-40ad-9f30-c39affa2bbac,Namespace:calico-system,Attempt:0,}" Dec 13 02:26:53.256795 systemd[1]: Created slice kubepods-besteffort-podd7e8fe28_38c6_4e23_979c_3075ce4b6bf5.slice - libcontainer container kubepods-besteffort-podd7e8fe28_38c6_4e23_979c_3075ce4b6bf5.slice. 
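
The tigera-operator entry above makes the pod_startup_latency_tracker arithmetic visible: podStartSLOduration excludes image-pull time, i.e. SLO ≈ E2E - (lastFinishedPulling - firstStartedPulling). With the logged values, 8.629924108s - 6.521570460s = 2.108353648s, matching the reported 2.108353598 to well under a microsecond (compare kube-proxy earlier, where no pull happened, the pulling timestamps are zero, and the two durations are equal). A quick check:

    // sloduration.go: reproducing podStartSLOduration for tigera-operator
    // from the values in the log line above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        e2e := 8629924108 * time.Nanosecond // podStartE2EDuration=8.629924108s
        firstPull, _ := time.Parse(time.RFC3339Nano, "2024-12-13T02:26:41.893093142Z")
        lastPull, _ := time.Parse(time.RFC3339Nano, "2024-12-13T02:26:48.414663602Z")
        // Prints 2.108353648s vs the logged podStartSLOduration=2.108353598.
        fmt.Println(e2e - lastPull.Sub(firstPull))
    }
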
Dec 13 02:26:53.345418 kubelet[2588]: I1213 02:26:53.345285 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-bin-dir\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.345418 kubelet[2588]: I1213 02:26:53.345380 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-flexvol-driver-host\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.345418 kubelet[2588]: I1213 02:26:53.345432 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2g8d\" (UniqueName: \"kubernetes.io/projected/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-kube-api-access-s2g8d\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.346045 kubelet[2588]: I1213 02:26:53.345512 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-var-lib-calico\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.346045 kubelet[2588]: I1213 02:26:53.345557 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-xtables-lock\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.346045 kubelet[2588]: I1213 02:26:53.345603 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-net-dir\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.346045 kubelet[2588]: I1213 02:26:53.345649 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-lib-modules\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.346045 kubelet[2588]: I1213 02:26:53.345690 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-policysync\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.346440 kubelet[2588]: I1213 02:26:53.345882 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-tigera-ca-bundle\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.346440 kubelet[2588]: I1213 02:26:53.346089 2588 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-log-dir\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.346440 kubelet[2588]: I1213 02:26:53.346395 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-node-certs\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.346616 kubelet[2588]: I1213 02:26:53.346585 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-var-run-calico\") pod \"calico-node-mngdp\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " pod="calico-system/calico-node-mngdp" Dec 13 02:26:53.557376 containerd[1455]: time="2024-12-13T02:26:53.556691491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:53.557376 containerd[1455]: time="2024-12-13T02:26:53.556842718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:53.557376 containerd[1455]: time="2024-12-13T02:26:53.556895798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:53.558879 containerd[1455]: time="2024-12-13T02:26:53.557667722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:53.569046 kubelet[2588]: E1213 02:26:53.568975 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:53.569046 kubelet[2588]: W1213 02:26:53.569045 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:53.574728 kubelet[2588]: E1213 02:26:53.569116 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:53.602562 systemd[1]: Started cri-containerd-1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd.scope - libcontainer container 1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd. 
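
The driver-call triplets that begin above and repeat below have a mechanical explanation. On each plugin probe the kubelet executes the FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and expects a JSON status reply; the binary does not exist yet (presumably because calico-node, which mounts flexvol-driver-host above, has not installed it at this point in bring-up), so the exec fails with "executable file not found in $PATH", the captured output is empty, and decoding empty output as JSON fails with "unexpected end of JSON input". Both messages are verbatim Go standard library errors, as this sketch reproduces:

    // flexprobe.go: reproducing the two error strings from the FlexVolume
    // probe failures in the log: a missing driver binary, then a JSON
    // decode of its empty output.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // The kubelet resolves the driver inside its plugin directory; a
        // bare name absent from $PATH is used here only to surface the
        // same stdlib error string.
        out, err := exec.Command("uds", "init").CombinedOutput()
        fmt.Println("exec error:", err) // exec: "uds": executable file not found in $PATH

        // The driver's reply should be JSON such as {"status":"Success"};
        // unmarshalling the empty output fails exactly as logged.
        var status map[string]interface{}
        fmt.Println("unmarshal error:", json.Unmarshal(out, &status))
        // unexpected end of JSON input
    }
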
Dec 13 02:26:53.654892 containerd[1455]: time="2024-12-13T02:26:53.654834133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6798bdff7f-zlgg7,Uid:d3313764-c25a-40ad-9f30-c39affa2bbac,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\"" Dec 13 02:26:53.658084 containerd[1455]: time="2024-12-13T02:26:53.657277687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 02:26:53.676158 kubelet[2588]: E1213 02:26:53.675619 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:53.676390 kubelet[2588]: W1213 02:26:53.676306 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:53.676390 kubelet[2588]: E1213 02:26:53.676335 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:53.779925 kubelet[2588]: E1213 02:26:53.779573 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:53.779925 kubelet[2588]: W1213 02:26:53.779623 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:53.779925 kubelet[2588]: E1213 02:26:53.779668 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:53.882486 kubelet[2588]: E1213 02:26:53.882276 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:53.882486 kubelet[2588]: W1213 02:26:53.882325 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:53.882486 kubelet[2588]: E1213 02:26:53.882371 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:53.983791 kubelet[2588]: E1213 02:26:53.983709 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:53.983791 kubelet[2588]: W1213 02:26:53.983757 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:53.983791 kubelet[2588]: E1213 02:26:53.983793 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.085102 kubelet[2588]: E1213 02:26:54.085038 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.085102 kubelet[2588]: W1213 02:26:54.085073 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.085102 kubelet[2588]: E1213 02:26:54.085098 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.186484 kubelet[2588]: E1213 02:26:54.186283 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.186484 kubelet[2588]: W1213 02:26:54.186322 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.186484 kubelet[2588]: E1213 02:26:54.186353 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.288755 kubelet[2588]: E1213 02:26:54.288685 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.289042 kubelet[2588]: W1213 02:26:54.288792 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.289042 kubelet[2588]: E1213 02:26:54.288836 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.304085 kubelet[2588]: E1213 02:26:54.303597 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.304085 kubelet[2588]: W1213 02:26:54.303906 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.304085 kubelet[2588]: E1213 02:26:54.303956 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.450016 kubelet[2588]: E1213 02:26:54.449797 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:26:54.459183 kubelet[2588]: E1213 02:26:54.459090 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.459793 kubelet[2588]: W1213 02:26:54.459388 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.459793 kubelet[2588]: E1213 02:26:54.459437 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.461317 kubelet[2588]: E1213 02:26:54.461288 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.461731 kubelet[2588]: W1213 02:26:54.461478 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.461731 kubelet[2588]: E1213 02:26:54.461518 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.462464 kubelet[2588]: E1213 02:26:54.462438 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.462992 kubelet[2588]: W1213 02:26:54.462611 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.462992 kubelet[2588]: E1213 02:26:54.462648 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.463970 kubelet[2588]: E1213 02:26:54.463811 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.463970 kubelet[2588]: W1213 02:26:54.463839 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.463970 kubelet[2588]: E1213 02:26:54.463862 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.464937 kubelet[2588]: E1213 02:26:54.464621 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.464937 kubelet[2588]: W1213 02:26:54.464648 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.464937 kubelet[2588]: E1213 02:26:54.464670 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.465634 kubelet[2588]: E1213 02:26:54.465317 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.465634 kubelet[2588]: W1213 02:26:54.465342 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.465634 kubelet[2588]: E1213 02:26:54.465364 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.466208 kubelet[2588]: E1213 02:26:54.465953 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.466208 kubelet[2588]: W1213 02:26:54.465979 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.466208 kubelet[2588]: E1213 02:26:54.466002 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.467772 kubelet[2588]: E1213 02:26:54.466730 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.467772 kubelet[2588]: W1213 02:26:54.466776 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.467772 kubelet[2588]: E1213 02:26:54.466799 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.468536 kubelet[2588]: E1213 02:26:54.468335 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.468536 kubelet[2588]: W1213 02:26:54.468365 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.468536 kubelet[2588]: E1213 02:26:54.468388 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.471501 kubelet[2588]: E1213 02:26:54.471323 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.471501 kubelet[2588]: W1213 02:26:54.471365 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.471501 kubelet[2588]: E1213 02:26:54.471396 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.472649 kubelet[2588]: E1213 02:26:54.472362 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.472649 kubelet[2588]: W1213 02:26:54.472419 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.472649 kubelet[2588]: E1213 02:26:54.472446 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.473713 kubelet[2588]: E1213 02:26:54.473335 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.473713 kubelet[2588]: W1213 02:26:54.473363 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.473713 kubelet[2588]: E1213 02:26:54.473386 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.474283 kubelet[2588]: E1213 02:26:54.474041 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.474283 kubelet[2588]: W1213 02:26:54.474067 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.474283 kubelet[2588]: E1213 02:26:54.474090 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.474945 kubelet[2588]: E1213 02:26:54.474743 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.474945 kubelet[2588]: W1213 02:26:54.474769 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.474945 kubelet[2588]: E1213 02:26:54.474791 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.475560 kubelet[2588]: E1213 02:26:54.475420 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.475560 kubelet[2588]: W1213 02:26:54.475446 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.475560 kubelet[2588]: E1213 02:26:54.475469 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.476276 kubelet[2588]: E1213 02:26:54.476070 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.476276 kubelet[2588]: W1213 02:26:54.476096 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.476276 kubelet[2588]: E1213 02:26:54.476165 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.477737 kubelet[2588]: E1213 02:26:54.477522 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.477737 kubelet[2588]: W1213 02:26:54.477552 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.477737 kubelet[2588]: E1213 02:26:54.477576 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.478334 kubelet[2588]: E1213 02:26:54.478187 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.478334 kubelet[2588]: W1213 02:26:54.478215 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.478334 kubelet[2588]: E1213 02:26:54.478238 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.479206 kubelet[2588]: E1213 02:26:54.478912 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.479206 kubelet[2588]: W1213 02:26:54.478939 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.479206 kubelet[2588]: E1213 02:26:54.478962 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.479567 kubelet[2588]: E1213 02:26:54.479543 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.479829 kubelet[2588]: W1213 02:26:54.479676 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.479829 kubelet[2588]: E1213 02:26:54.479708 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.494324 kubelet[2588]: E1213 02:26:54.491065 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.494324 kubelet[2588]: W1213 02:26:54.491099 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.494324 kubelet[2588]: E1213 02:26:54.491159 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.494324 kubelet[2588]: I1213 02:26:54.491238 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8rbl\" (UniqueName: \"kubernetes.io/projected/3f08b161-a86f-45d0-9c4d-8166dcb1e19a-kube-api-access-d8rbl\") pod \"csi-node-driver-t8jkg\" (UID: \"3f08b161-a86f-45d0-9c4d-8166dcb1e19a\") " pod="calico-system/csi-node-driver-t8jkg" Dec 13 02:26:54.494324 kubelet[2588]: E1213 02:26:54.491694 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.494324 kubelet[2588]: W1213 02:26:54.491716 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.494324 kubelet[2588]: E1213 02:26:54.491745 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.497868 kubelet[2588]: E1213 02:26:54.495212 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.497868 kubelet[2588]: W1213 02:26:54.495254 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.497868 kubelet[2588]: E1213 02:26:54.497180 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.497868 kubelet[2588]: E1213 02:26:54.497455 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.497868 kubelet[2588]: W1213 02:26:54.497480 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.497868 kubelet[2588]: E1213 02:26:54.497505 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.497868 kubelet[2588]: I1213 02:26:54.497560 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3f08b161-a86f-45d0-9c4d-8166dcb1e19a-registration-dir\") pod \"csi-node-driver-t8jkg\" (UID: \"3f08b161-a86f-45d0-9c4d-8166dcb1e19a\") " pod="calico-system/csi-node-driver-t8jkg" Dec 13 02:26:54.498633 kubelet[2588]: E1213 02:26:54.498573 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.498633 kubelet[2588]: W1213 02:26:54.498601 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.498910 kubelet[2588]: E1213 02:26:54.498854 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.499413 kubelet[2588]: I1213 02:26:54.499172 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3f08b161-a86f-45d0-9c4d-8166dcb1e19a-socket-dir\") pod \"csi-node-driver-t8jkg\" (UID: \"3f08b161-a86f-45d0-9c4d-8166dcb1e19a\") " pod="calico-system/csi-node-driver-t8jkg" Dec 13 02:26:54.499919 kubelet[2588]: E1213 02:26:54.499881 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.500089 kubelet[2588]: W1213 02:26:54.500060 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.501513 kubelet[2588]: E1213 02:26:54.501401 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.501513 kubelet[2588]: I1213 02:26:54.501460 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3f08b161-a86f-45d0-9c4d-8166dcb1e19a-varrun\") pod \"csi-node-driver-t8jkg\" (UID: \"3f08b161-a86f-45d0-9c4d-8166dcb1e19a\") " pod="calico-system/csi-node-driver-t8jkg" Dec 13 02:26:54.508475 containerd[1455]: time="2024-12-13T02:26:54.508233721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mngdp,Uid:d7e8fe28-38c6-4e23-979c-3075ce4b6bf5,Namespace:calico-system,Attempt:0,}" Dec 13 02:26:54.516025 kubelet[2588]: E1213 02:26:54.515866 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.516025 kubelet[2588]: W1213 02:26:54.515900 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.516025 kubelet[2588]: E1213 02:26:54.515996 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.516552 kubelet[2588]: E1213 02:26:54.516420 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.516552 kubelet[2588]: W1213 02:26:54.516432 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.516552 kubelet[2588]: E1213 02:26:54.516518 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.516839 kubelet[2588]: E1213 02:26:54.516736 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.516839 kubelet[2588]: W1213 02:26:54.516746 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.516985 kubelet[2588]: E1213 02:26:54.516916 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.516985 kubelet[2588]: I1213 02:26:54.516946 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f08b161-a86f-45d0-9c4d-8166dcb1e19a-kubelet-dir\") pod \"csi-node-driver-t8jkg\" (UID: \"3f08b161-a86f-45d0-9c4d-8166dcb1e19a\") " pod="calico-system/csi-node-driver-t8jkg" Dec 13 02:26:54.517143 kubelet[2588]: E1213 02:26:54.517069 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.517143 kubelet[2588]: W1213 02:26:54.517079 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.517304 kubelet[2588]: E1213 02:26:54.517221 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.517432 kubelet[2588]: E1213 02:26:54.517419 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.517580 kubelet[2588]: W1213 02:26:54.517520 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.517580 kubelet[2588]: E1213 02:26:54.517544 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.518006 kubelet[2588]: E1213 02:26:54.517928 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.518006 kubelet[2588]: W1213 02:26:54.517940 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.518006 kubelet[2588]: E1213 02:26:54.517967 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.518440 kubelet[2588]: E1213 02:26:54.518325 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.518440 kubelet[2588]: W1213 02:26:54.518337 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.518440 kubelet[2588]: E1213 02:26:54.518346 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.518875 kubelet[2588]: E1213 02:26:54.518748 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.518875 kubelet[2588]: W1213 02:26:54.518760 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.518875 kubelet[2588]: E1213 02:26:54.518770 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.519153 kubelet[2588]: E1213 02:26:54.519075 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.519153 kubelet[2588]: W1213 02:26:54.519086 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.519153 kubelet[2588]: E1213 02:26:54.519096 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.618593 kubelet[2588]: E1213 02:26:54.618404 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.618593 kubelet[2588]: W1213 02:26:54.618445 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.618593 kubelet[2588]: E1213 02:26:54.618480 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.619433 kubelet[2588]: E1213 02:26:54.619361 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.619433 kubelet[2588]: W1213 02:26:54.619390 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.619433 kubelet[2588]: E1213 02:26:54.619418 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.619676 kubelet[2588]: E1213 02:26:54.619606 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.619676 kubelet[2588]: W1213 02:26:54.619616 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.619676 kubelet[2588]: E1213 02:26:54.619626 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.619901 kubelet[2588]: E1213 02:26:54.619852 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.619901 kubelet[2588]: W1213 02:26:54.619868 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.619901 kubelet[2588]: E1213 02:26:54.619879 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.620572 kubelet[2588]: E1213 02:26:54.620251 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.620572 kubelet[2588]: W1213 02:26:54.620262 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.620572 kubelet[2588]: E1213 02:26:54.620378 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.621540 kubelet[2588]: E1213 02:26:54.621326 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.621540 kubelet[2588]: W1213 02:26:54.621360 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.621540 kubelet[2588]: E1213 02:26:54.621388 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.622963 kubelet[2588]: E1213 02:26:54.622635 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.622963 kubelet[2588]: W1213 02:26:54.622782 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.623885 kubelet[2588]: E1213 02:26:54.622895 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.624335 kubelet[2588]: E1213 02:26:54.624053 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.624335 kubelet[2588]: W1213 02:26:54.624082 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.624691 kubelet[2588]: E1213 02:26:54.624596 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.625203 kubelet[2588]: E1213 02:26:54.624901 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.625203 kubelet[2588]: W1213 02:26:54.624928 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.625203 kubelet[2588]: E1213 02:26:54.624962 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.627624 kubelet[2588]: E1213 02:26:54.627245 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.627624 kubelet[2588]: W1213 02:26:54.627272 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.627624 kubelet[2588]: E1213 02:26:54.627439 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.628020 kubelet[2588]: E1213 02:26:54.627667 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.628020 kubelet[2588]: W1213 02:26:54.627679 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.628020 kubelet[2588]: E1213 02:26:54.627837 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.628020 kubelet[2588]: E1213 02:26:54.627973 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.628020 kubelet[2588]: W1213 02:26:54.627983 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.629906 kubelet[2588]: E1213 02:26:54.628097 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.629906 kubelet[2588]: E1213 02:26:54.628231 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.629906 kubelet[2588]: W1213 02:26:54.628240 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.629906 kubelet[2588]: E1213 02:26:54.628301 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.629906 kubelet[2588]: E1213 02:26:54.628504 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.629906 kubelet[2588]: W1213 02:26:54.628518 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.629906 kubelet[2588]: E1213 02:26:54.628537 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.629906 kubelet[2588]: E1213 02:26:54.628826 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.629906 kubelet[2588]: W1213 02:26:54.628839 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.629906 kubelet[2588]: E1213 02:26:54.628879 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.630855 kubelet[2588]: E1213 02:26:54.629173 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.630855 kubelet[2588]: W1213 02:26:54.629185 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.630855 kubelet[2588]: E1213 02:26:54.629199 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.630855 kubelet[2588]: E1213 02:26:54.629510 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.630855 kubelet[2588]: W1213 02:26:54.629522 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.630855 kubelet[2588]: E1213 02:26:54.629534 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.630855 kubelet[2588]: E1213 02:26:54.629944 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.630855 kubelet[2588]: W1213 02:26:54.629957 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.630855 kubelet[2588]: E1213 02:26:54.630001 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.630855 kubelet[2588]: E1213 02:26:54.630328 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.633586 kubelet[2588]: W1213 02:26:54.630343 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.633586 kubelet[2588]: E1213 02:26:54.630608 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.633586 kubelet[2588]: W1213 02:26:54.630620 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.633586 kubelet[2588]: E1213 02:26:54.630885 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.633586 kubelet[2588]: W1213 02:26:54.630897 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.633586 kubelet[2588]: E1213 02:26:54.630910 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.633586 kubelet[2588]: E1213 02:26:54.631409 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.633586 kubelet[2588]: W1213 02:26:54.631423 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.633586 kubelet[2588]: E1213 02:26:54.631470 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.633586 kubelet[2588]: E1213 02:26:54.631774 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.634769 kubelet[2588]: W1213 02:26:54.631788 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.634769 kubelet[2588]: E1213 02:26:54.631927 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.634769 kubelet[2588]: E1213 02:26:54.631956 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.634769 kubelet[2588]: E1213 02:26:54.632368 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:26:54.634769 kubelet[2588]: E1213 02:26:54.632596 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.634769 kubelet[2588]: W1213 02:26:54.632609 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.634769 kubelet[2588]: E1213 02:26:54.632624 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.635288 kubelet[2588]: E1213 02:26:54.635216 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.635288 kubelet[2588]: W1213 02:26:54.635230 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.635288 kubelet[2588]: E1213 02:26:54.635242 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.677287 kubelet[2588]: E1213 02:26:54.675298 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:26:54.677287 kubelet[2588]: W1213 02:26:54.675340 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:26:54.677287 kubelet[2588]: E1213 02:26:54.675361 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:26:54.703443 containerd[1455]: time="2024-12-13T02:26:54.702773588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:54.703443 containerd[1455]: time="2024-12-13T02:26:54.702861816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:54.703443 containerd[1455]: time="2024-12-13T02:26:54.702889067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:54.703443 containerd[1455]: time="2024-12-13T02:26:54.703193784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:54.741365 systemd[1]: Started cri-containerd-c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f.scope - libcontainer container c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f. 
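[Editor's note] The wall of kubelet errors above is one failure repeated by the FlexVolume prober: kubelet execs each driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the argument `init` and JSON-decodes the driver's stdout. The nodeagent~uds binary does not exist yet, so the call yields empty output, and decoding an empty string fails with exactly "unexpected end of JSON input". A minimal Go sketch of that probe path follows — struct fields and error wording are illustrative assumptions, not kubelet's exact source:

```go
// Sketch of the FlexVolume probe path logged by driver-call.go: exec the
// driver with "init" and unmarshal its stdout. A missing binary produces
// empty output, so json.Unmarshal fails with "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus approximates the shape of a FlexVolume driver response
// (field names assumed for illustration).
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probe(driver string) (*DriverStatus, error) {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// Mirrors the W "driver call failed" line; kubelet still tries to
		// decode whatever output it got.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st DriverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// With out empty this is precisely "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", out, uerr)
	}
	return &st, nil
}

func main() {
	if _, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}
```

Until something installs the uds executable, every periodic plugin probe repeats the same E/W/E triplet seen above.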
Dec 13 02:26:54.770792 containerd[1455]: time="2024-12-13T02:26:54.770682441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mngdp,Uid:d7e8fe28-38c6-4e23-979c-3075ce4b6bf5,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\"" Dec 13 02:26:56.440642 kubelet[2588]: E1213 02:26:56.440529 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:26:56.543268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256581022.mount: Deactivated successfully. Dec 13 02:26:58.440810 kubelet[2588]: E1213 02:26:58.440373 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:26:58.892418 containerd[1455]: time="2024-12-13T02:26:58.892267533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:59.045846 containerd[1455]: time="2024-12-13T02:26:59.045724256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 02:26:59.068080 containerd[1455]: time="2024-12-13T02:26:59.066479850Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:59.086185 containerd[1455]: time="2024-12-13T02:26:59.085851959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:26:59.092497 containerd[1455]: time="2024-12-13T02:26:59.092405604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 5.435075949s" Dec 13 02:26:59.093312 containerd[1455]: time="2024-12-13T02:26:59.092500835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 02:26:59.096293 containerd[1455]: time="2024-12-13T02:26:59.095809106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 02:26:59.139178 containerd[1455]: time="2024-12-13T02:26:59.139048329Z" level=info msg="CreateContainer within sandbox \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:26:59.182184 containerd[1455]: time="2024-12-13T02:26:59.181758742Z" level=info msg="CreateContainer within sandbox \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\"" Dec 13 02:26:59.184813 containerd[1455]: time="2024-12-13T02:26:59.184728505Z" level=info msg="StartContainer for \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\"" Dec 13 02:26:59.235318 systemd[1]: Started cri-containerd-7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe.scope - libcontainer container 7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe. Dec 13 02:26:59.851726 containerd[1455]: time="2024-12-13T02:26:59.851654512Z" level=info msg="StartContainer for \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\" returns successfully" Dec 13 02:27:00.440281 kubelet[2588]: E1213 02:27:00.439836 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:27:00.925254 kubelet[2588]: E1213 02:27:00.925198 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.925254 kubelet[2588]: W1213 02:27:00.925233 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.925556 kubelet[2588]: E1213 02:27:00.925270 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.926214 kubelet[2588]: E1213 02:27:00.926191 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.926214 kubelet[2588]: W1213 02:27:00.926210 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.926354 kubelet[2588]: E1213 02:27:00.926224 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.927099 kubelet[2588]: E1213 02:27:00.927077 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.927099 kubelet[2588]: W1213 02:27:00.927096 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.927279 kubelet[2588]: E1213 02:27:00.927145 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:27:00.927482 kubelet[2588]: E1213 02:27:00.927460 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.927482 kubelet[2588]: W1213 02:27:00.927478 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.928147 kubelet[2588]: E1213 02:27:00.927490 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.928209 kubelet[2588]: E1213 02:27:00.928190 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.928209 kubelet[2588]: W1213 02:27:00.928202 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.928267 kubelet[2588]: E1213 02:27:00.928214 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.928688 kubelet[2588]: E1213 02:27:00.928664 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.928688 kubelet[2588]: W1213 02:27:00.928681 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.928807 kubelet[2588]: E1213 02:27:00.928692 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.931224 kubelet[2588]: E1213 02:27:00.931198 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.931224 kubelet[2588]: W1213 02:27:00.931216 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.931498 kubelet[2588]: E1213 02:27:00.931230 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.931498 kubelet[2588]: E1213 02:27:00.931493 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.931575 kubelet[2588]: W1213 02:27:00.931503 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.931575 kubelet[2588]: E1213 02:27:00.931514 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:27:00.932293 kubelet[2588]: E1213 02:27:00.932244 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.932293 kubelet[2588]: W1213 02:27:00.932276 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.932293 kubelet[2588]: E1213 02:27:00.932290 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.932549 kubelet[2588]: E1213 02:27:00.932512 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.932549 kubelet[2588]: W1213 02:27:00.932522 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.932549 kubelet[2588]: E1213 02:27:00.932531 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.932834 kubelet[2588]: E1213 02:27:00.932797 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.932834 kubelet[2588]: W1213 02:27:00.932811 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.932834 kubelet[2588]: E1213 02:27:00.932824 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.933235 kubelet[2588]: E1213 02:27:00.933016 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.933235 kubelet[2588]: W1213 02:27:00.933026 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.933235 kubelet[2588]: E1213 02:27:00.933035 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.934216 kubelet[2588]: E1213 02:27:00.934190 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.934216 kubelet[2588]: W1213 02:27:00.934206 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.934216 kubelet[2588]: E1213 02:27:00.934217 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:27:00.934502 kubelet[2588]: E1213 02:27:00.934481 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.934502 kubelet[2588]: W1213 02:27:00.934495 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.934502 kubelet[2588]: E1213 02:27:00.934503 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.935613 kubelet[2588]: E1213 02:27:00.935591 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.935613 kubelet[2588]: W1213 02:27:00.935609 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.935700 kubelet[2588]: E1213 02:27:00.935624 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.940302 kubelet[2588]: I1213 02:27:00.940239 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6798bdff7f-zlgg7" podStartSLOduration=3.502367065 podStartE2EDuration="8.940216s" podCreationTimestamp="2024-12-13 02:26:52 +0000 UTC" firstStartedPulling="2024-12-13 02:26:53.656565055 +0000 UTC m=+18.399395394" lastFinishedPulling="2024-12-13 02:26:59.09441395 +0000 UTC m=+23.837244329" observedRunningTime="2024-12-13 02:27:00.916657972 +0000 UTC m=+25.659488301" watchObservedRunningTime="2024-12-13 02:27:00.940216 +0000 UTC m=+25.683046329" Dec 13 02:27:00.991336 kubelet[2588]: E1213 02:27:00.991066 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.991336 kubelet[2588]: W1213 02:27:00.991090 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.991336 kubelet[2588]: E1213 02:27:00.991141 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.991769 kubelet[2588]: E1213 02:27:00.991694 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.991769 kubelet[2588]: W1213 02:27:00.991707 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.991769 kubelet[2588]: E1213 02:27:00.991726 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:27:00.993431 kubelet[2588]: E1213 02:27:00.993388 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.993624 kubelet[2588]: W1213 02:27:00.993435 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.993624 kubelet[2588]: E1213 02:27:00.993484 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.993906 kubelet[2588]: E1213 02:27:00.993886 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.994064 kubelet[2588]: W1213 02:27:00.993908 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.994064 kubelet[2588]: E1213 02:27:00.993961 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.994363 kubelet[2588]: E1213 02:27:00.994342 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.994509 kubelet[2588]: W1213 02:27:00.994363 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.994509 kubelet[2588]: E1213 02:27:00.994450 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.994581 kubelet[2588]: E1213 02:27:00.994559 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.994581 kubelet[2588]: W1213 02:27:00.994573 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.994885 kubelet[2588]: E1213 02:27:00.994728 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.994885 kubelet[2588]: E1213 02:27:00.994778 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.994885 kubelet[2588]: W1213 02:27:00.994846 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.994885 kubelet[2588]: E1213 02:27:00.994859 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:27:00.995898 kubelet[2588]: E1213 02:27:00.995369 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.995898 kubelet[2588]: W1213 02:27:00.995382 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.995898 kubelet[2588]: E1213 02:27:00.995395 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.996088 kubelet[2588]: E1213 02:27:00.996076 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.996209 kubelet[2588]: W1213 02:27:00.996196 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.996342 kubelet[2588]: E1213 02:27:00.996307 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.996613 kubelet[2588]: E1213 02:27:00.996600 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.996700 kubelet[2588]: W1213 02:27:00.996689 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.996847 kubelet[2588]: E1213 02:27:00.996833 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.997079 kubelet[2588]: E1213 02:27:00.997069 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.997176 kubelet[2588]: W1213 02:27:00.997165 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.997276 kubelet[2588]: E1213 02:27:00.997237 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.997543 kubelet[2588]: E1213 02:27:00.997517 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.997543 kubelet[2588]: W1213 02:27:00.997529 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.997769 kubelet[2588]: E1213 02:27:00.997640 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:27:00.997995 kubelet[2588]: E1213 02:27:00.997968 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.997995 kubelet[2588]: W1213 02:27:00.997981 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.998679 kubelet[2588]: E1213 02:27:00.998480 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.998679 kubelet[2588]: W1213 02:27:00.998494 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.998679 kubelet[2588]: E1213 02:27:00.998504 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.999134 kubelet[2588]: E1213 02:27:00.998903 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.999134 kubelet[2588]: W1213 02:27:00.998914 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.999134 kubelet[2588]: E1213 02:27:00.998923 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:00.999536 kubelet[2588]: E1213 02:27:00.999402 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:00.999536 kubelet[2588]: W1213 02:27:00.999416 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:00.999536 kubelet[2588]: E1213 02:27:00.999427 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:01.000414 kubelet[2588]: E1213 02:27:00.998076 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:01.000738 kubelet[2588]: E1213 02:27:01.000727 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:01.000991 kubelet[2588]: W1213 02:27:01.000978 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:01.001135 kubelet[2588]: E1213 02:27:01.001062 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:27:01.001850 kubelet[2588]: E1213 02:27:01.001804 2588 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:27:01.001850 kubelet[2588]: W1213 02:27:01.001816 2588 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:27:01.001850 kubelet[2588]: E1213 02:27:01.001827 2588 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:27:01.410755 containerd[1455]: time="2024-12-13T02:27:01.410688399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:01.412159 containerd[1455]: time="2024-12-13T02:27:01.411989605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 02:27:01.413241 containerd[1455]: time="2024-12-13T02:27:01.413185614Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:01.416141 containerd[1455]: time="2024-12-13T02:27:01.416068066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:01.416965 containerd[1455]: time="2024-12-13T02:27:01.416826417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.320947469s" Dec 13 02:27:01.416965 containerd[1455]: time="2024-12-13T02:27:01.416866604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 02:27:01.420400 containerd[1455]: time="2024-12-13T02:27:01.420370509Z" level=info msg="CreateContainer within sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:27:01.450435 containerd[1455]: time="2024-12-13T02:27:01.450316449Z" level=info msg="CreateContainer within sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\"" Dec 13 02:27:01.451450 containerd[1455]: time="2024-12-13T02:27:01.451316457Z" level=info msg="StartContainer for \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\"" Dec 13 02:27:01.497362 systemd[1]: Started cri-containerd-21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a.scope - libcontainer container 21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a. 
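[Editor's note] The flexvol-driver container started here (from the pod2daemon-flexvol image pulled just above) is the run-to-completion init container that installs the uds driver binary into the FlexVolume plugin directory — consistent with the probe errors no longer recurring after this point in the log. The contract the prober enforces is small: the executable must exist and answer `init` with valid JSON on stdout. A hypothetical minimal driver, for reference only (the real nodeagent~uds driver does more):

```go
// Minimal FlexVolume driver sketch: answer "init" with a Success status in
// JSON on stdout. "attach": false tells kubelet not to expect attach/detach
// calls for this driver.
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
		os.Exit(0)
	}
	// FlexVolume's conventional reply for unimplemented calls.
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}
```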
Dec 13 02:27:01.542461 containerd[1455]: time="2024-12-13T02:27:01.542382124Z" level=info msg="StartContainer for \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\" returns successfully" Dec 13 02:27:01.555045 systemd[1]: cri-containerd-21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a.scope: Deactivated successfully. Dec 13 02:27:01.582866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a-rootfs.mount: Deactivated successfully. Dec 13 02:27:01.843268 containerd[1455]: time="2024-12-13T02:27:01.842745086Z" level=info msg="shim disconnected" id=21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a namespace=k8s.io Dec 13 02:27:01.844090 containerd[1455]: time="2024-12-13T02:27:01.843462600Z" level=warning msg="cleaning up after shim disconnected" id=21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a namespace=k8s.io Dec 13 02:27:01.844090 containerd[1455]: time="2024-12-13T02:27:01.843501323Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:01.882104 containerd[1455]: time="2024-12-13T02:27:01.880414628Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:27:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 02:27:02.440147 kubelet[2588]: E1213 02:27:02.440058 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:27:02.884084 containerd[1455]: time="2024-12-13T02:27:02.883844005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 02:27:04.439986 kubelet[2588]: E1213 02:27:04.439818 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:27:06.442839 kubelet[2588]: E1213 02:27:06.440223 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:27:08.439890 kubelet[2588]: E1213 02:27:08.439803 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:27:09.641484 containerd[1455]: time="2024-12-13T02:27:09.641384961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:09.646263 containerd[1455]: time="2024-12-13T02:27:09.645384472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 02:27:09.649261 containerd[1455]: 
time="2024-12-13T02:27:09.649063350Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:09.683552 containerd[1455]: time="2024-12-13T02:27:09.683449068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:09.687652 containerd[1455]: time="2024-12-13T02:27:09.687552775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.803543049s" Dec 13 02:27:09.687652 containerd[1455]: time="2024-12-13T02:27:09.687630793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 02:27:09.695826 containerd[1455]: time="2024-12-13T02:27:09.695629524Z" level=info msg="CreateContainer within sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 02:27:09.740956 containerd[1455]: time="2024-12-13T02:27:09.740829706Z" level=info msg="CreateContainer within sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\"" Dec 13 02:27:09.743236 containerd[1455]: time="2024-12-13T02:27:09.741703101Z" level=info msg="StartContainer for \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\"" Dec 13 02:27:09.860318 systemd[1]: Started cri-containerd-e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea.scope - libcontainer container e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea. Dec 13 02:27:09.908227 containerd[1455]: time="2024-12-13T02:27:09.905378810Z" level=info msg="StartContainer for \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\" returns successfully" Dec 13 02:27:10.440190 kubelet[2588]: E1213 02:27:10.439977 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:27:12.441296 kubelet[2588]: E1213 02:27:12.440623 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:27:12.475614 systemd[1]: cri-containerd-e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea.scope: Deactivated successfully. Dec 13 02:27:12.678850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea-rootfs.mount: Deactivated successfully. 
Dec 13 02:27:13.038066 kubelet[2588]: I1213 02:27:12.943243 2588 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 02:27:13.177765 containerd[1455]: time="2024-12-13T02:27:13.176264969Z" level=info msg="shim disconnected" id=e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea namespace=k8s.io Dec 13 02:27:13.177765 containerd[1455]: time="2024-12-13T02:27:13.176395144Z" level=warning msg="cleaning up after shim disconnected" id=e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea namespace=k8s.io Dec 13 02:27:13.177765 containerd[1455]: time="2024-12-13T02:27:13.176418929Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:13.250376 systemd[1]: Created slice kubepods-burstable-pode426aab6_f0cc_4d85_8ff0_a831034783ac.slice - libcontainer container kubepods-burstable-pode426aab6_f0cc_4d85_8ff0_a831034783ac.slice. Dec 13 02:27:13.269220 systemd[1]: Created slice kubepods-burstable-pod91c52a03_065a_4299_8a52_a6f37a97ba45.slice - libcontainer container kubepods-burstable-pod91c52a03_065a_4299_8a52_a6f37a97ba45.slice. Dec 13 02:27:13.281881 systemd[1]: Created slice kubepods-besteffort-podecfe26ab_ca79_4c78_8b1f_6efd69af6d02.slice - libcontainer container kubepods-besteffort-podecfe26ab_ca79_4c78_8b1f_6efd69af6d02.slice. Dec 13 02:27:13.292617 kubelet[2588]: I1213 02:27:13.292544 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e426aab6-f0cc-4d85-8ff0-a831034783ac-config-volume\") pod \"coredns-6f6b679f8f-kkd64\" (UID: \"e426aab6-f0cc-4d85-8ff0-a831034783ac\") " pod="kube-system/coredns-6f6b679f8f-kkd64" Dec 13 02:27:13.293018 kubelet[2588]: I1213 02:27:13.293001 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91c52a03-065a-4299-8a52-a6f37a97ba45-config-volume\") pod \"coredns-6f6b679f8f-rg7ts\" (UID: \"91c52a03-065a-4299-8a52-a6f37a97ba45\") " pod="kube-system/coredns-6f6b679f8f-rg7ts" Dec 13 02:27:13.293236 kubelet[2588]: I1213 02:27:13.293219 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4fhz\" (UniqueName: \"kubernetes.io/projected/91c52a03-065a-4299-8a52-a6f37a97ba45-kube-api-access-j4fhz\") pod \"coredns-6f6b679f8f-rg7ts\" (UID: \"91c52a03-065a-4299-8a52-a6f37a97ba45\") " pod="kube-system/coredns-6f6b679f8f-rg7ts" Dec 13 02:27:13.293393 kubelet[2588]: I1213 02:27:13.293361 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8sgp\" (UniqueName: \"kubernetes.io/projected/e426aab6-f0cc-4d85-8ff0-a831034783ac-kube-api-access-z8sgp\") pod \"coredns-6f6b679f8f-kkd64\" (UID: \"e426aab6-f0cc-4d85-8ff0-a831034783ac\") " pod="kube-system/coredns-6f6b679f8f-kkd64" Dec 13 02:27:13.295654 systemd[1]: Created slice kubepods-besteffort-podd1e87c6a_e32e_4fa8_9314_e0438c9aec4d.slice - libcontainer container kubepods-besteffort-podd1e87c6a_e32e_4fa8_9314_e0438c9aec4d.slice. Dec 13 02:27:13.304448 systemd[1]: Created slice kubepods-besteffort-pod6892dbf0_f6b7_4660_b308_d92eaf9f3043.slice - libcontainer container kubepods-besteffort-pod6892dbf0_f6b7_4660_b308_d92eaf9f3043.slice. 
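[Editor's note] The kubepods-*.slice names created above follow the systemd cgroup driver's escaping convention: "kubepods-" plus the QoS class, plus "pod" and the pod UID with dashes mapped to underscores. A small sketch that reproduces the slice names in this log:

```go
// Reproduce the systemd slice naming seen above, e.g. pod UID
// e426aab6-f0cc-4d85-8ff0-a831034783ac in the burstable QoS class becomes
// kubepods-burstable-pode426aab6_f0cc_4d85_8ff0_a831034783ac.slice.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "e426aab6-f0cc-4d85-8ff0-a831034783ac"))
	fmt.Println(podSlice("besteffort", "ecfe26ab-ca79-4c78-8b1f-6efd69af6d02"))
}
```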
Dec 13 02:27:13.395234 kubelet[2588]: I1213 02:27:13.395050 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm6f5\" (UniqueName: \"kubernetes.io/projected/6892dbf0-f6b7-4660-b308-d92eaf9f3043-kube-api-access-vm6f5\") pod \"calico-apiserver-57b88875bb-7wjvt\" (UID: \"6892dbf0-f6b7-4660-b308-d92eaf9f3043\") " pod="calico-apiserver/calico-apiserver-57b88875bb-7wjvt" Dec 13 02:27:13.395234 kubelet[2588]: I1213 02:27:13.395110 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6892dbf0-f6b7-4660-b308-d92eaf9f3043-calico-apiserver-certs\") pod \"calico-apiserver-57b88875bb-7wjvt\" (UID: \"6892dbf0-f6b7-4660-b308-d92eaf9f3043\") " pod="calico-apiserver/calico-apiserver-57b88875bb-7wjvt" Dec 13 02:27:13.395234 kubelet[2588]: I1213 02:27:13.395156 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1e87c6a-e32e-4fa8-9314-e0438c9aec4d-tigera-ca-bundle\") pod \"calico-kube-controllers-7dd4dd5bd7-zwr86\" (UID: \"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d\") " pod="calico-system/calico-kube-controllers-7dd4dd5bd7-zwr86" Dec 13 02:27:13.395234 kubelet[2588]: I1213 02:27:13.395198 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5mrq\" (UniqueName: \"kubernetes.io/projected/ecfe26ab-ca79-4c78-8b1f-6efd69af6d02-kube-api-access-l5mrq\") pod \"calico-apiserver-57b88875bb-sw527\" (UID: \"ecfe26ab-ca79-4c78-8b1f-6efd69af6d02\") " pod="calico-apiserver/calico-apiserver-57b88875bb-sw527" Dec 13 02:27:13.395234 kubelet[2588]: I1213 02:27:13.395242 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecfe26ab-ca79-4c78-8b1f-6efd69af6d02-calico-apiserver-certs\") pod \"calico-apiserver-57b88875bb-sw527\" (UID: \"ecfe26ab-ca79-4c78-8b1f-6efd69af6d02\") " pod="calico-apiserver/calico-apiserver-57b88875bb-sw527" Dec 13 02:27:13.395905 kubelet[2588]: I1213 02:27:13.395294 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqwf2\" (UniqueName: \"kubernetes.io/projected/d1e87c6a-e32e-4fa8-9314-e0438c9aec4d-kube-api-access-cqwf2\") pod \"calico-kube-controllers-7dd4dd5bd7-zwr86\" (UID: \"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d\") " pod="calico-system/calico-kube-controllers-7dd4dd5bd7-zwr86" Dec 13 02:27:13.563042 containerd[1455]: time="2024-12-13T02:27:13.562429215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkd64,Uid:e426aab6-f0cc-4d85-8ff0-a831034783ac,Namespace:kube-system,Attempt:0,}" Dec 13 02:27:13.577822 containerd[1455]: time="2024-12-13T02:27:13.577412432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rg7ts,Uid:91c52a03-065a-4299-8a52-a6f37a97ba45,Namespace:kube-system,Attempt:0,}" Dec 13 02:27:13.594875 containerd[1455]: time="2024-12-13T02:27:13.594252684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b88875bb-sw527,Uid:ecfe26ab-ca79-4c78-8b1f-6efd69af6d02,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:27:13.606244 containerd[1455]: time="2024-12-13T02:27:13.603687904Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7dd4dd5bd7-zwr86,Uid:d1e87c6a-e32e-4fa8-9314-e0438c9aec4d,Namespace:calico-system,Attempt:0,}" Dec 13 02:27:13.616083 containerd[1455]: time="2024-12-13T02:27:13.616002281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b88875bb-7wjvt,Uid:6892dbf0-f6b7-4660-b308-d92eaf9f3043,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:27:13.949415 containerd[1455]: time="2024-12-13T02:27:13.949363520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 02:27:14.082021 containerd[1455]: time="2024-12-13T02:27:14.081936418Z" level=error msg="Failed to destroy network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.084724 containerd[1455]: time="2024-12-13T02:27:14.084022101Z" level=error msg="Failed to destroy network for sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.088799 containerd[1455]: time="2024-12-13T02:27:14.088764200Z" level=error msg="encountered an error cleaning up failed sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.088985 containerd[1455]: time="2024-12-13T02:27:14.088909544Z" level=error msg="Failed to destroy network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.089154 containerd[1455]: time="2024-12-13T02:27:14.089115130Z" level=error msg="Failed to destroy network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.089487 containerd[1455]: time="2024-12-13T02:27:14.089442576Z" level=error msg="encountered an error cleaning up failed sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.089559 containerd[1455]: time="2024-12-13T02:27:14.089520503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b88875bb-7wjvt,Uid:6892dbf0-f6b7-4660-b308-d92eaf9f3043,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 
13 02:27:14.089731 containerd[1455]: time="2024-12-13T02:27:14.089706022Z" level=error msg="encountered an error cleaning up failed sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.089828 containerd[1455]: time="2024-12-13T02:27:14.089802423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dd4dd5bd7-zwr86,Uid:d1e87c6a-e32e-4fa8-9314-e0438c9aec4d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.100223 containerd[1455]: time="2024-12-13T02:27:14.088928880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rg7ts,Uid:91c52a03-065a-4299-8a52-a6f37a97ba45,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.100223 containerd[1455]: time="2024-12-13T02:27:14.088793296Z" level=error msg="encountered an error cleaning up failed sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.100223 containerd[1455]: time="2024-12-13T02:27:14.097964826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b88875bb-sw527,Uid:ecfe26ab-ca79-4c78-8b1f-6efd69af6d02,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.100223 containerd[1455]: time="2024-12-13T02:27:14.088936755Z" level=error msg="Failed to destroy network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.100223 containerd[1455]: time="2024-12-13T02:27:14.098550107Z" level=error msg="encountered an error cleaning up failed sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.100223 containerd[1455]: time="2024-12-13T02:27:14.098594972Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-kkd64,Uid:e426aab6-f0cc-4d85-8ff0-a831034783ac,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.105443 kubelet[2588]: E1213 02:27:14.098939 2588 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.105443 kubelet[2588]: E1213 02:27:14.099100 2588 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkd64" Dec 13 02:27:14.105443 kubelet[2588]: E1213 02:27:14.099167 2588 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkd64" Dec 13 02:27:14.105927 kubelet[2588]: E1213 02:27:14.099261 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kkd64_kube-system(e426aab6-f0cc-4d85-8ff0-a831034783ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kkd64_kube-system(e426aab6-f0cc-4d85-8ff0-a831034783ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kkd64" podUID="e426aab6-f0cc-4d85-8ff0-a831034783ac" Dec 13 02:27:14.105927 kubelet[2588]: E1213 02:27:14.099338 2588 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.105927 kubelet[2588]: E1213 02:27:14.099363 2588 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-57b88875bb-7wjvt" Dec 13 02:27:14.106089 kubelet[2588]: E1213 02:27:14.099388 2588 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b88875bb-7wjvt" Dec 13 02:27:14.106089 kubelet[2588]: E1213 02:27:14.099423 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b88875bb-7wjvt_calico-apiserver(6892dbf0-f6b7-4660-b308-d92eaf9f3043)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b88875bb-7wjvt_calico-apiserver(6892dbf0-f6b7-4660-b308-d92eaf9f3043)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b88875bb-7wjvt" podUID="6892dbf0-f6b7-4660-b308-d92eaf9f3043" Dec 13 02:27:14.106089 kubelet[2588]: E1213 02:27:14.099479 2588 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.106637 kubelet[2588]: E1213 02:27:14.099501 2588 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dd4dd5bd7-zwr86" Dec 13 02:27:14.106637 kubelet[2588]: E1213 02:27:14.099520 2588 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dd4dd5bd7-zwr86" Dec 13 02:27:14.106637 kubelet[2588]: E1213 02:27:14.099552 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dd4dd5bd7-zwr86_calico-system(d1e87c6a-e32e-4fa8-9314-e0438c9aec4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dd4dd5bd7-zwr86_calico-system(d1e87c6a-e32e-4fa8-9314-e0438c9aec4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dd4dd5bd7-zwr86" podUID="d1e87c6a-e32e-4fa8-9314-e0438c9aec4d" Dec 13 02:27:14.106842 kubelet[2588]: E1213 02:27:14.099601 2588 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.106842 kubelet[2588]: E1213 02:27:14.099625 2588 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rg7ts" Dec 13 02:27:14.106842 kubelet[2588]: E1213 02:27:14.099644 2588 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rg7ts" Dec 13 02:27:14.108234 kubelet[2588]: E1213 02:27:14.099673 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rg7ts_kube-system(91c52a03-065a-4299-8a52-a6f37a97ba45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rg7ts_kube-system(91c52a03-065a-4299-8a52-a6f37a97ba45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rg7ts" podUID="91c52a03-065a-4299-8a52-a6f37a97ba45" Dec 13 02:27:14.108234 kubelet[2588]: E1213 02:27:14.099868 2588 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.108234 kubelet[2588]: E1213 02:27:14.099893 2588 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b88875bb-sw527" Dec 13 02:27:14.108386 kubelet[2588]: E1213 02:27:14.099913 2588 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b88875bb-sw527" Dec 13 02:27:14.108386 kubelet[2588]: E1213 02:27:14.099948 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b88875bb-sw527_calico-apiserver(ecfe26ab-ca79-4c78-8b1f-6efd69af6d02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b88875bb-sw527_calico-apiserver(ecfe26ab-ca79-4c78-8b1f-6efd69af6d02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b88875bb-sw527" podUID="ecfe26ab-ca79-4c78-8b1f-6efd69af6d02" Dec 13 02:27:14.455267 systemd[1]: Created slice kubepods-besteffort-pod3f08b161_a86f_45d0_9c4d_8166dcb1e19a.slice - libcontainer container kubepods-besteffort-pod3f08b161_a86f_45d0_9c4d_8166dcb1e19a.slice. Dec 13 02:27:14.461199 containerd[1455]: time="2024-12-13T02:27:14.460993135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8jkg,Uid:3f08b161-a86f-45d0-9c4d-8166dcb1e19a,Namespace:calico-system,Attempt:0,}" Dec 13 02:27:14.576920 containerd[1455]: time="2024-12-13T02:27:14.576804718Z" level=error msg="Failed to destroy network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.577369 containerd[1455]: time="2024-12-13T02:27:14.577294730Z" level=error msg="encountered an error cleaning up failed sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.577475 containerd[1455]: time="2024-12-13T02:27:14.577400329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8jkg,Uid:3f08b161-a86f-45d0-9c4d-8166dcb1e19a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.578261 kubelet[2588]: E1213 02:27:14.577767 2588 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:14.578261 kubelet[2588]: E1213 02:27:14.577903 2588 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8jkg" Dec 13 02:27:14.578261 kubelet[2588]: E1213 02:27:14.577953 2588 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8jkg" Dec 13 02:27:14.578428 kubelet[2588]: E1213 02:27:14.578066 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t8jkg_calico-system(3f08b161-a86f-45d0-9c4d-8166dcb1e19a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t8jkg_calico-system(3f08b161-a86f-45d0-9c4d-8166dcb1e19a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:27:14.674889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3-shm.mount: Deactivated successfully. Dec 13 02:27:14.675163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425-shm.mount: Deactivated successfully. Dec 13 02:27:14.675360 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11-shm.mount: Deactivated successfully. 
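Every failure above is the same underlying condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that calico/node writes into its /var/lib/calico/ host mount once it is running. Until that file exists, every CNI add and delete fails, kubelet surfaces CreatePodSandboxError (or KillPodSandboxError on cleanup), and the pod workers keep retrying. The following is a minimal Go sketch of that readiness check, for illustration only — it is not Calico's actual source; the path and the error wording are taken from the log entries above:

```go
// Illustrative sketch (not Calico's real code): refuse to do CNI work until
// calico/node has written its nodename file. This is the check failing in
// the log entries above.
package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path the log shows the plugin stat()ing; calico/node
// creates it through the /var/lib/calico/ host mount once it is up.
const nodenameFile = "/var/lib/calico/nodename"

func ensureCalicoReady() (string, error) {
	name, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(name), nil
}

func main() {
	name, err := ensureCalicoReady()
	if err != nil {
		// kubelet wraps this into CreatePodSandboxError and retries the pod.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```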
Dec 13 02:27:14.937186 kubelet[2588]: I1213 02:27:14.936417 2588 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:14.945334 kubelet[2588]: I1213 02:27:14.943465 2588 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:14.957990 containerd[1455]: time="2024-12-13T02:27:14.955054709Z" level=info msg="StopPodSandbox for \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\"" Dec 13 02:27:14.961841 containerd[1455]: time="2024-12-13T02:27:14.961151496Z" level=info msg="Ensure that sandbox 854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167 in task-service has been cleanup successfully" Dec 13 02:27:14.963628 containerd[1455]: time="2024-12-13T02:27:14.963499954Z" level=info msg="StopPodSandbox for \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\"" Dec 13 02:27:14.964586 containerd[1455]: time="2024-12-13T02:27:14.964519752Z" level=info msg="Ensure that sandbox 1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3 in task-service has been cleanup successfully" Dec 13 02:27:14.996450 kubelet[2588]: I1213 02:27:14.996402 2588 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:15.008747 containerd[1455]: time="2024-12-13T02:27:15.004183616Z" level=info msg="StopPodSandbox for \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\"" Dec 13 02:27:15.008747 containerd[1455]: time="2024-12-13T02:27:15.005060926Z" level=info msg="Ensure that sandbox 902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425 in task-service has been cleanup successfully" Dec 13 02:27:15.034749 kubelet[2588]: I1213 02:27:15.034716 2588 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:15.036771 containerd[1455]: time="2024-12-13T02:27:15.036736344Z" level=info msg="StopPodSandbox for \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\"" Dec 13 02:27:15.038192 containerd[1455]: time="2024-12-13T02:27:15.038165752Z" level=info msg="Ensure that sandbox a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11 in task-service has been cleanup successfully" Dec 13 02:27:15.052284 kubelet[2588]: I1213 02:27:15.052238 2588 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:15.055142 containerd[1455]: time="2024-12-13T02:27:15.055080502Z" level=info msg="StopPodSandbox for \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\"" Dec 13 02:27:15.057757 containerd[1455]: time="2024-12-13T02:27:15.057717280Z" level=info msg="Ensure that sandbox be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed in task-service has been cleanup successfully" Dec 13 02:27:15.061347 kubelet[2588]: I1213 02:27:15.060895 2588 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:15.075782 containerd[1455]: time="2024-12-13T02:27:15.075736307Z" level=info msg="StopPodSandbox for \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\"" Dec 13 02:27:15.076296 
containerd[1455]: time="2024-12-13T02:27:15.076271894Z" level=info msg="Ensure that sandbox 32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500 in task-service has been cleanup successfully" Dec 13 02:27:15.079454 containerd[1455]: time="2024-12-13T02:27:15.079382523Z" level=error msg="StopPodSandbox for \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\" failed" error="failed to destroy network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:15.080047 kubelet[2588]: E1213 02:27:15.079771 2588 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:15.080047 kubelet[2588]: E1213 02:27:15.079855 2588 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3"} Dec 13 02:27:15.080047 kubelet[2588]: E1213 02:27:15.079942 2588 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6892dbf0-f6b7-4660-b308-d92eaf9f3043\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:27:15.080047 kubelet[2588]: E1213 02:27:15.079975 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6892dbf0-f6b7-4660-b308-d92eaf9f3043\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b88875bb-7wjvt" podUID="6892dbf0-f6b7-4660-b308-d92eaf9f3043" Dec 13 02:27:15.119970 containerd[1455]: time="2024-12-13T02:27:15.119905791Z" level=error msg="StopPodSandbox for \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\" failed" error="failed to destroy network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:15.120619 kubelet[2588]: E1213 02:27:15.120551 2588 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:15.121026 kubelet[2588]: E1213 02:27:15.120998 2588 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167"} Dec 13 02:27:15.121187 kubelet[2588]: E1213 02:27:15.121166 2588 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:27:15.121344 kubelet[2588]: E1213 02:27:15.121319 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dd4dd5bd7-zwr86" podUID="d1e87c6a-e32e-4fa8-9314-e0438c9aec4d" Dec 13 02:27:15.145636 containerd[1455]: time="2024-12-13T02:27:15.145585374Z" level=error msg="StopPodSandbox for \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\" failed" error="failed to destroy network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:15.148429 kubelet[2588]: E1213 02:27:15.148373 2588 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:15.148604 kubelet[2588]: E1213 02:27:15.148580 2588 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11"} Dec 13 02:27:15.148703 kubelet[2588]: E1213 02:27:15.148685 2588 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e426aab6-f0cc-4d85-8ff0-a831034783ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:27:15.148867 kubelet[2588]: E1213 02:27:15.148834 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"e426aab6-f0cc-4d85-8ff0-a831034783ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kkd64" podUID="e426aab6-f0cc-4d85-8ff0-a831034783ac" Dec 13 02:27:15.159022 containerd[1455]: time="2024-12-13T02:27:15.158938686Z" level=error msg="StopPodSandbox for \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\" failed" error="failed to destroy network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:15.159898 kubelet[2588]: E1213 02:27:15.159587 2588 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:15.159898 kubelet[2588]: E1213 02:27:15.159685 2588 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425"} Dec 13 02:27:15.159898 kubelet[2588]: E1213 02:27:15.159754 2588 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91c52a03-065a-4299-8a52-a6f37a97ba45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:27:15.159898 kubelet[2588]: E1213 02:27:15.159857 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91c52a03-065a-4299-8a52-a6f37a97ba45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rg7ts" podUID="91c52a03-065a-4299-8a52-a6f37a97ba45" Dec 13 02:27:15.169231 containerd[1455]: time="2024-12-13T02:27:15.169161892Z" level=error msg="StopPodSandbox for \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\" failed" error="failed to destroy network for sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:15.170385 kubelet[2588]: E1213 02:27:15.170346 2588 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:15.170532 kubelet[2588]: E1213 02:27:15.170507 2588 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500"} Dec 13 02:27:15.170634 kubelet[2588]: E1213 02:27:15.170611 2588 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ecfe26ab-ca79-4c78-8b1f-6efd69af6d02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:27:15.170777 kubelet[2588]: E1213 02:27:15.170753 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ecfe26ab-ca79-4c78-8b1f-6efd69af6d02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b88875bb-sw527" podUID="ecfe26ab-ca79-4c78-8b1f-6efd69af6d02" Dec 13 02:27:15.175372 containerd[1455]: time="2024-12-13T02:27:15.175334159Z" level=error msg="StopPodSandbox for \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\" failed" error="failed to destroy network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:27:15.175707 kubelet[2588]: E1213 02:27:15.175675 2588 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:15.175821 kubelet[2588]: E1213 02:27:15.175803 2588 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed"} Dec 13 02:27:15.175951 kubelet[2588]: E1213 02:27:15.175931 2588 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f08b161-a86f-45d0-9c4d-8166dcb1e19a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:27:15.176144 kubelet[2588]: E1213 02:27:15.176100 2588 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f08b161-a86f-45d0-9c4d-8166dcb1e19a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8jkg" podUID="3f08b161-a86f-45d0-9c4d-8166dcb1e19a" Dec 13 02:27:23.785646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894825410.mount: Deactivated successfully. Dec 13 02:27:24.776157 containerd[1455]: time="2024-12-13T02:27:24.773601658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 02:27:24.785788 containerd[1455]: time="2024-12-13T02:27:24.785508714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:24.800369 containerd[1455]: time="2024-12-13T02:27:24.800287193Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:24.804251 containerd[1455]: time="2024-12-13T02:27:24.804190194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:24.809985 containerd[1455]: time="2024-12-13T02:27:24.809901712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.855510018s" Dec 13 02:27:24.809985 containerd[1455]: time="2024-12-13T02:27:24.809979258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 02:27:24.882943 containerd[1455]: time="2024-12-13T02:27:24.882875121Z" level=info msg="CreateContainer within sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:27:25.052274 containerd[1455]: time="2024-12-13T02:27:25.051988315Z" level=info msg="CreateContainer within sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\"" Dec 13 02:27:25.054331 containerd[1455]: time="2024-12-13T02:27:25.054256675Z" level=info msg="StartContainer for \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\"" Dec 13 02:27:25.147432 systemd[1]: Started cri-containerd-5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f.scope - libcontainer container 5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f. 
Dec 13 02:27:25.221998 containerd[1455]: time="2024-12-13T02:27:25.221480029Z" level=info msg="StartContainer for \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\" returns successfully" Dec 13 02:27:25.361997 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 02:27:25.363872 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 02:27:27.215166 kernel: bpftool[3894]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 02:27:27.443530 containerd[1455]: time="2024-12-13T02:27:27.443163713Z" level=info msg="StopPodSandbox for \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\"" Dec 13 02:27:27.443530 containerd[1455]: time="2024-12-13T02:27:27.443469337Z" level=info msg="StopPodSandbox for \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\"" Dec 13 02:27:27.450759 containerd[1455]: time="2024-12-13T02:27:27.450305715Z" level=info msg="StopPodSandbox for \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\"" Dec 13 02:27:27.696491 systemd-networkd[1365]: vxlan.calico: Link UP Dec 13 02:27:27.696502 systemd-networkd[1365]: vxlan.calico: Gained carrier Dec 13 02:27:27.819402 kubelet[2588]: I1213 02:27:27.792515 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mngdp" podStartSLOduration=4.699190484 podStartE2EDuration="34.737699375s" podCreationTimestamp="2024-12-13 02:26:53 +0000 UTC" firstStartedPulling="2024-12-13 02:26:54.772206169 +0000 UTC m=+19.515036498" lastFinishedPulling="2024-12-13 02:27:24.81071506 +0000 UTC m=+49.553545389" observedRunningTime="2024-12-13 02:27:25.629449184 +0000 UTC m=+50.372279513" watchObservedRunningTime="2024-12-13 02:27:27.737699375 +0000 UTC m=+52.480529815" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:27.729 [INFO][3950] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:27.733 [INFO][3950] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" iface="eth0" netns="/var/run/netns/cni-c496c8a6-c8e2-6122-a793-3ec7e9dd6f81" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:27.733 [INFO][3950] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" iface="eth0" netns="/var/run/netns/cni-c496c8a6-c8e2-6122-a793-3ec7e9dd6f81" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:27.742 [INFO][3950] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" iface="eth0" netns="/var/run/netns/cni-c496c8a6-c8e2-6122-a793-3ec7e9dd6f81" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:27.742 [INFO][3950] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:27.742 [INFO][3950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:28.795 [INFO][4004] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" HandleID="k8s-pod-network.be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:28.797 [INFO][4004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:28.797 [INFO][4004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:28.821 [WARNING][4004] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" HandleID="k8s-pod-network.be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:28.822 [INFO][4004] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" HandleID="k8s-pod-network.be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:28.824 [INFO][4004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:28.837061 containerd[1455]: 2024-12-13 02:27:28.832 [INFO][3950] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:28.838862 containerd[1455]: time="2024-12-13T02:27:28.837344946Z" level=info msg="TearDown network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\" successfully" Dec 13 02:27:28.838862 containerd[1455]: time="2024-12-13T02:27:28.837380783Z" level=info msg="StopPodSandbox for \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\" returns successfully" Dec 13 02:27:28.845216 containerd[1455]: time="2024-12-13T02:27:28.843418050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8jkg,Uid:3f08b161-a86f-45d0-9c4d-8166dcb1e19a,Namespace:calico-system,Attempt:1,}" Dec 13 02:27:28.846650 systemd[1]: run-netns-cni\x2dc496c8a6\x2dc8e2\x2d6122\x2da793\x2d3ec7e9dd6f81.mount: Deactivated successfully. Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:27.749 [INFO][3959] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:27.750 [INFO][3959] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" iface="eth0" netns="/var/run/netns/cni-a392f1bb-9a92-0293-c293-a4073ec92eb7" Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:27.750 [INFO][3959] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" iface="eth0" netns="/var/run/netns/cni-a392f1bb-9a92-0293-c293-a4073ec92eb7" Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:27.750 [INFO][3959] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" iface="eth0" netns="/var/run/netns/cni-a392f1bb-9a92-0293-c293-a4073ec92eb7" Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:27.752 [INFO][3959] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:27.752 [INFO][3959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:28.794 [INFO][4007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" HandleID="k8s-pod-network.a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:28.797 [INFO][4007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:28.826 [INFO][4007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:28.854 [WARNING][4007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" HandleID="k8s-pod-network.a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:28.854 [INFO][4007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" HandleID="k8s-pod-network.a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:28.860 [INFO][4007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:28.869349 containerd[1455]: 2024-12-13 02:27:28.865 [INFO][3959] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:28.869349 containerd[1455]: time="2024-12-13T02:27:28.869077520Z" level=info msg="TearDown network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\" successfully" Dec 13 02:27:28.869349 containerd[1455]: time="2024-12-13T02:27:28.869179742Z" level=info msg="StopPodSandbox for \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\" returns successfully" Dec 13 02:27:28.875903 containerd[1455]: time="2024-12-13T02:27:28.874895284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkd64,Uid:e426aab6-f0cc-4d85-8ff0-a831034783ac,Namespace:kube-system,Attempt:1,}" Dec 13 02:27:28.877643 systemd[1]: run-netns-cni\x2da392f1bb\x2d9a92\x2d0293\x2dc293\x2da4073ec92eb7.mount: Deactivated successfully. Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:27.731 [INFO][3949] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:27.733 [INFO][3949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" iface="eth0" netns="/var/run/netns/cni-28cb67ba-9043-88a8-774f-4d743f48835b" Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:27.734 [INFO][3949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" iface="eth0" netns="/var/run/netns/cni-28cb67ba-9043-88a8-774f-4d743f48835b" Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:27.744 [INFO][3949] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" iface="eth0" netns="/var/run/netns/cni-28cb67ba-9043-88a8-774f-4d743f48835b" Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:27.744 [INFO][3949] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:27.744 [INFO][3949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:28.793 [INFO][4005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" HandleID="k8s-pod-network.902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:28.798 [INFO][4005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:28.860 [INFO][4005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:28.885 [WARNING][4005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" HandleID="k8s-pod-network.902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:28.885 [INFO][4005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" HandleID="k8s-pod-network.902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:28.891 [INFO][4005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:28.913161 containerd[1455]: 2024-12-13 02:27:28.900 [INFO][3949] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:28.916332 containerd[1455]: time="2024-12-13T02:27:28.914075122Z" level=info msg="TearDown network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\" successfully" Dec 13 02:27:28.916332 containerd[1455]: time="2024-12-13T02:27:28.916210702Z" level=info msg="StopPodSandbox for \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\" returns successfully" Dec 13 02:27:28.919529 systemd[1]: run-netns-cni\x2d28cb67ba\x2d9043\x2d88a8\x2d774f\x2d4d743f48835b.mount: Deactivated successfully. Dec 13 02:27:28.951749 containerd[1455]: time="2024-12-13T02:27:28.951684421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rg7ts,Uid:91c52a03-065a-4299-8a52-a6f37a97ba45,Namespace:kube-system,Attempt:1,}" Dec 13 02:27:29.268790 systemd-networkd[1365]: cali43010fa0bae: Link UP Dec 13 02:27:29.269400 systemd-networkd[1365]: cali43010fa0bae: Gained carrier Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.015 [INFO][4061] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0 csi-node-driver- calico-system 3f08b161-a86f-45d0-9c4d-8166dcb1e19a 805 0 2024-12-13 02:26:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-b-462e46fdf9.novalocal csi-node-driver-t8jkg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali43010fa0bae [] []}} ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Namespace="calico-system" Pod="csi-node-driver-t8jkg" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.015 [INFO][4061] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Namespace="calico-system" Pod="csi-node-driver-t8jkg" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.148 [INFO][4092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" HandleID="k8s-pod-network.f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.177 [INFO][4092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" HandleID="k8s-pod-network.f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000161050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-b-462e46fdf9.novalocal", "pod":"csi-node-driver-t8jkg", "timestamp":"2024-12-13 02:27:29.148297441 +0000 UTC"}, Hostname:"ci-4081-2-1-b-462e46fdf9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.177 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.178 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.178 [INFO][4092] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-462e46fdf9.novalocal' Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.186 [INFO][4092] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.209 [INFO][4092] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.224 [INFO][4092] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.229 [INFO][4092] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.235 [INFO][4092] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.235 [INFO][4092] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.237 [INFO][4092] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5 Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.247 [INFO][4092] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.260 [INFO][4092] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.193/26] 
block=192.168.89.192/26 handle="k8s-pod-network.f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.260 [INFO][4092] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.193/26] handle="k8s-pod-network.f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.260 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:29.298356 containerd[1455]: 2024-12-13 02:27:29.260 [INFO][4092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.193/26] IPv6=[] ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" HandleID="k8s-pod-network.f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:29.301615 containerd[1455]: 2024-12-13 02:27:29.264 [INFO][4061] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Namespace="calico-system" Pod="csi-node-driver-t8jkg" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f08b161-a86f-45d0-9c4d-8166dcb1e19a", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"", Pod:"csi-node-driver-t8jkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43010fa0bae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:29.301615 containerd[1455]: 2024-12-13 02:27:29.265 [INFO][4061] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.193/32] ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Namespace="calico-system" Pod="csi-node-driver-t8jkg" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:29.301615 containerd[1455]: 2024-12-13 02:27:29.265 [INFO][4061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43010fa0bae ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Namespace="calico-system" Pod="csi-node-driver-t8jkg" 
WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:29.301615 containerd[1455]: 2024-12-13 02:27:29.270 [INFO][4061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Namespace="calico-system" Pod="csi-node-driver-t8jkg" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:29.301615 containerd[1455]: 2024-12-13 02:27:29.271 [INFO][4061] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Namespace="calico-system" Pod="csi-node-driver-t8jkg" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f08b161-a86f-45d0-9c4d-8166dcb1e19a", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5", Pod:"csi-node-driver-t8jkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43010fa0bae", MAC:"fa:de:f6:4d:46:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:29.301615 containerd[1455]: 2024-12-13 02:27:29.293 [INFO][4061] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5" Namespace="calico-system" Pod="csi-node-driver-t8jkg" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:29.351920 containerd[1455]: time="2024-12-13T02:27:29.351024933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:27:29.351920 containerd[1455]: time="2024-12-13T02:27:29.351143647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:27:29.351920 containerd[1455]: time="2024-12-13T02:27:29.351187880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:29.351920 containerd[1455]: time="2024-12-13T02:27:29.351515515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:29.386448 systemd[1]: Started cri-containerd-f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5.scope - libcontainer container f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5. Dec 13 02:27:29.435657 systemd-networkd[1365]: cali6daff0d3284: Link UP Dec 13 02:27:29.435846 systemd-networkd[1365]: cali6daff0d3284: Gained carrier Dec 13 02:27:29.449247 containerd[1455]: time="2024-12-13T02:27:29.448454604Z" level=info msg="StopPodSandbox for \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\"" Dec 13 02:27:29.453962 containerd[1455]: time="2024-12-13T02:27:29.451109248Z" level=info msg="StopPodSandbox for \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\"" Dec 13 02:27:29.454460 containerd[1455]: time="2024-12-13T02:27:29.453821241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8jkg,Uid:3f08b161-a86f-45d0-9c4d-8166dcb1e19a,Namespace:calico-system,Attempt:1,} returns sandbox id \"f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5\"" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.109 [INFO][4073] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0 coredns-6f6b679f8f- kube-system e426aab6-f0cc-4d85-8ff0-a831034783ac 806 0 2024-12-13 02:26:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-b-462e46fdf9.novalocal coredns-6f6b679f8f-kkd64 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6daff0d3284 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkd64" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.110 [INFO][4073] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkd64" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.196 [INFO][4101] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" HandleID="k8s-pod-network.9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.222 [INFO][4101] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" HandleID="k8s-pod-network.9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319580), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-b-462e46fdf9.novalocal", "pod":"coredns-6f6b679f8f-kkd64", "timestamp":"2024-12-13 02:27:29.19689758 +0000 UTC"}, Hostname:"ci-4081-2-1-b-462e46fdf9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.222 [INFO][4101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.260 [INFO][4101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.260 [INFO][4101] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-462e46fdf9.novalocal' Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.286 [INFO][4101] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.366 [INFO][4101] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.384 [INFO][4101] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.389 [INFO][4101] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.392 [INFO][4101] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.392 [INFO][4101] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.394 [INFO][4101] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9 Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.401 [INFO][4101] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.422 [INFO][4101] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.194/26] block=192.168.89.192/26 handle="k8s-pod-network.9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.423 [INFO][4101] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.194/26] handle="k8s-pod-network.9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.423 [INFO][4101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
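Editor's note: the [4101] sequence above is the second of three back-to-back Calico IPAM assignments against the same block — confirm the host's affinity for 192.168.89.192/26, load the block, claim the next free address, and write the block back to the datastore. A minimal Go sketch of that allocation pattern follows; the Block type and AssignNext helper are invented for illustration and stand in for Calico's real allocation bitmap and datastore write, not the actual ipam.go code.

// block_sketch.go — hypothetical illustration of next-free-IP assignment
// from an affine /26 block, as seen in the ipam.go records above.
package main

import (
	"fmt"
	"net"
)

// Block models one affine CIDR block (e.g. 192.168.89.192/26); a simple
// claimed-address set stands in for Calico's allocation bitmap.
type Block struct {
	CIDR    *net.IPNet
	claimed map[string]bool
}

func NewBlock(cidr string) (*Block, error) {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	return &Block{CIDR: n, claimed: map[string]bool{}}, nil
}

// AssignNext walks the block in address order and claims the first free
// IP, mirroring "Attempting to assign 1 addresses from block" followed by
// "Writing block in order to claim IPs" in the log.
func (b *Block) AssignNext(handle string) (net.IP, error) {
	for ip := b.CIDR.IP.Mask(b.CIDR.Mask); b.CIDR.Contains(ip); ip = next(ip) {
		if !b.claimed[ip.String()] {
			b.claimed[ip.String()] = true // real code persists the block here
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted for handle %s", b.CIDR, handle)
}

// next returns ip + 1, carrying across octets.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	b, _ := NewBlock("192.168.89.192/26")
	b.claimed["192.168.89.192"] = true // the block's network address is not handed out
	for _, pod := range []string{"csi-node-driver-t8jkg", "coredns-6f6b679f8f-kkd64"} {
		ip, _ := b.AssignNext("k8s-pod-network." + pod)
		fmt.Printf("%s -> %s/26\n", pod, ip) // .193 then .194, matching the log
	}
}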
Dec 13 02:27:29.485189 containerd[1455]: 2024-12-13 02:27:29.423 [INFO][4101] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.194/26] IPv6=[] ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" HandleID="k8s-pod-network.9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:29.486357 containerd[1455]: 2024-12-13 02:27:29.429 [INFO][4073] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkd64" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e426aab6-f0cc-4d85-8ff0-a831034783ac", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-kkd64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6daff0d3284", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:29.486357 containerd[1455]: 2024-12-13 02:27:29.430 [INFO][4073] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.194/32] ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkd64" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:29.486357 containerd[1455]: 2024-12-13 02:27:29.430 [INFO][4073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6daff0d3284 ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkd64" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:29.486357 containerd[1455]: 2024-12-13 02:27:29.436 [INFO][4073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-kkd64" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:29.486357 containerd[1455]: 2024-12-13 02:27:29.437 [INFO][4073] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkd64" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e426aab6-f0cc-4d85-8ff0-a831034783ac", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9", Pod:"coredns-6f6b679f8f-kkd64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6daff0d3284", MAC:"02:70:4b:eb:cd:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:29.486357 containerd[1455]: 2024-12-13 02:27:29.469 [INFO][4073] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkd64" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:29.499028 containerd[1455]: time="2024-12-13T02:27:29.498910198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 02:27:29.567683 systemd-networkd[1365]: califace4002a81: Link UP Dec 13 02:27:29.574263 systemd-networkd[1365]: califace4002a81: Gained carrier Dec 13 02:27:29.601625 containerd[1455]: time="2024-12-13T02:27:29.601307735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:27:29.601625 containerd[1455]: time="2024-12-13T02:27:29.601374089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:27:29.601625 containerd[1455]: time="2024-12-13T02:27:29.601389448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:29.601625 containerd[1455]: time="2024-12-13T02:27:29.601476522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:29.609227 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.133 [INFO][4082] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0 coredns-6f6b679f8f- kube-system 91c52a03-065a-4299-8a52-a6f37a97ba45 804 0 2024-12-13 02:26:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-b-462e46fdf9.novalocal coredns-6f6b679f8f-rg7ts eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califace4002a81 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Namespace="kube-system" Pod="coredns-6f6b679f8f-rg7ts" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.136 [INFO][4082] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Namespace="kube-system" Pod="coredns-6f6b679f8f-rg7ts" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.231 [INFO][4107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" HandleID="k8s-pod-network.f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.265 [INFO][4107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" HandleID="k8s-pod-network.f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a4670), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-b-462e46fdf9.novalocal", "pod":"coredns-6f6b679f8f-rg7ts", "timestamp":"2024-12-13 02:27:29.231624723 +0000 UTC"}, Hostname:"ci-4081-2-1-b-462e46fdf9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.265 [INFO][4107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.423 [INFO][4107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
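Editor's note: each containerd line above wraps an inner Calico CNI record with its own timestamp, [LEVEL][pid] pair, source file, line number, and message. When auditing a log like this it can help to pull those inner records back out. The sketch below assumes exactly that layout (which the entries here follow); calicoRecord and parse are invented names, not part of any real tool.

// parse_sketch.go — hypothetical extractor for the inner Calico CNI
// records embedded in the containerd journal lines above.
package main

import (
	"fmt"
	"regexp"
)

type calicoRecord struct {
	When, Level, Pid, File, Line, Msg string
}

// calicoRe matches: "2024-12-13 02:27:29.423 [INFO][4107] ipam/ipam_plugin.go 368: <msg>"
var calicoRe = regexp.MustCompile(
	`(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)`)

func parse(line string) (calicoRecord, bool) {
	m := calicoRe.FindStringSubmatch(line)
	if m == nil {
		return calicoRecord{}, false
	}
	return calicoRecord{When: m[1], Level: m[2], Pid: m[3], File: m[4], Line: m[5], Msg: m[6]}, true
}

func main() {
	r, ok := parse(`2024-12-13 02:27:29.423 [INFO][4107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.`)
	fmt.Println(ok, r.Level, r.File, r.Msg)
}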
Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.423 [INFO][4107] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-462e46fdf9.novalocal' Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.429 [INFO][4107] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.468 [INFO][4107] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.482 [INFO][4107] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.488 [INFO][4107] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.495 [INFO][4107] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.495 [INFO][4107] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.498 [INFO][4107] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.512 [INFO][4107] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.532 [INFO][4107] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.195/26] block=192.168.89.192/26 handle="k8s-pod-network.f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.533 [INFO][4107] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.195/26] handle="k8s-pod-network.f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.533 [INFO][4107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
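Editor's note: the three allocation flows tagged [4092], [4101] and [4107] interleave in time, yet each logs "About to acquire host-wide IPAM lock" before touching the block — which is why the pods end up with 192.168.89.193, .194 and .195 with no collisions. A minimal sketch of that serialization, with hypothetical names (hostIPAM, autoAssign) rather than the actual plugin code:

// lock_sketch.go — hypothetical illustration of the host-wide IPAM lock
// serializing concurrent CNI ADD requests, as in the records above.
package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // the "host-wide IPAM lock" from the log
	next int        // next free host part within 192.168.89.192/26
}

func (h *hostIPAM) autoAssign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.89.%d/26", h.next)
	h.next++
	fmt.Printf("%s -> %s\n", pod, ip)
}

func main() {
	h := &hostIPAM{next: 193}
	var wg sync.WaitGroup
	for _, pod := range []string{"csi-node-driver-t8jkg", "coredns-6f6b679f8f-kkd64", "coredns-6f6b679f8f-rg7ts"} {
		wg.Add(1)
		go h.autoAssign(pod, &wg) // concurrent CNI ADDs, serialized by the lock
	}
	wg.Wait()
}

The lock guarantees mutual exclusion (no duplicate claims), not a particular service order; in this log the three requests happened to complete in pod order.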
Dec 13 02:27:29.621965 containerd[1455]: 2024-12-13 02:27:29.533 [INFO][4107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.195/26] IPv6=[] ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" HandleID="k8s-pod-network.f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:29.624780 containerd[1455]: 2024-12-13 02:27:29.548 [INFO][4082] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Namespace="kube-system" Pod="coredns-6f6b679f8f-rg7ts" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"91c52a03-065a-4299-8a52-a6f37a97ba45", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-rg7ts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califace4002a81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:29.624780 containerd[1455]: 2024-12-13 02:27:29.549 [INFO][4082] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.195/32] ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Namespace="kube-system" Pod="coredns-6f6b679f8f-rg7ts" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:29.624780 containerd[1455]: 2024-12-13 02:27:29.549 [INFO][4082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califace4002a81 ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Namespace="kube-system" Pod="coredns-6f6b679f8f-rg7ts" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:29.624780 containerd[1455]: 2024-12-13 02:27:29.578 [INFO][4082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-rg7ts" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:29.624780 containerd[1455]: 2024-12-13 02:27:29.579 [INFO][4082] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Namespace="kube-system" Pod="coredns-6f6b679f8f-rg7ts" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"91c52a03-065a-4299-8a52-a6f37a97ba45", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac", Pod:"coredns-6f6b679f8f-rg7ts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califace4002a81", MAC:"c2:7b:50:4d:f6:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:29.624780 containerd[1455]: 2024-12-13 02:27:29.607 [INFO][4082] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac" Namespace="kube-system" Pod="coredns-6f6b679f8f-rg7ts" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:29.670387 systemd[1]: Started cri-containerd-9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9.scope - libcontainer container 9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9. Dec 13 02:27:29.722542 containerd[1455]: time="2024-12-13T02:27:29.721472952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:27:29.722542 containerd[1455]: time="2024-12-13T02:27:29.721588921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:27:29.722542 containerd[1455]: time="2024-12-13T02:27:29.721620540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:29.722542 containerd[1455]: time="2024-12-13T02:27:29.721772816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:29.768457 systemd[1]: Started cri-containerd-f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac.scope - libcontainer container f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac. Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.660 [INFO][4202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.660 [INFO][4202] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" iface="eth0" netns="/var/run/netns/cni-b334d3f4-dddc-ce40-18c8-a8cc5e4af3fd" Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.661 [INFO][4202] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" iface="eth0" netns="/var/run/netns/cni-b334d3f4-dddc-ce40-18c8-a8cc5e4af3fd" Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.663 [INFO][4202] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" iface="eth0" netns="/var/run/netns/cni-b334d3f4-dddc-ce40-18c8-a8cc5e4af3fd" Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.663 [INFO][4202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.663 [INFO][4202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.740 [INFO][4263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" HandleID="k8s-pod-network.1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.740 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.740 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.765 [WARNING][4263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" HandleID="k8s-pod-network.1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.765 [INFO][4263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" HandleID="k8s-pod-network.1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.773 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:29.785769 containerd[1455]: 2024-12-13 02:27:29.775 [INFO][4202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:29.789008 containerd[1455]: time="2024-12-13T02:27:29.788216497Z" level=info msg="TearDown network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\" successfully" Dec 13 02:27:29.789008 containerd[1455]: time="2024-12-13T02:27:29.788641265Z" level=info msg="StopPodSandbox for \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\" returns successfully" Dec 13 02:27:29.790846 containerd[1455]: time="2024-12-13T02:27:29.790183310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b88875bb-7wjvt,Uid:6892dbf0-f6b7-4660-b308-d92eaf9f3043,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:27:29.819531 containerd[1455]: time="2024-12-13T02:27:29.819423762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkd64,Uid:e426aab6-f0cc-4d85-8ff0-a831034783ac,Namespace:kube-system,Attempt:1,} returns sandbox id \"9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9\"" Dec 13 02:27:29.840532 containerd[1455]: time="2024-12-13T02:27:29.840210759Z" level=info msg="CreateContainer within sandbox \"9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:27:29.874068 systemd[1]: run-netns-cni\x2db334d3f4\x2ddddc\x2dce40\x2d18c8\x2da8cc5e4af3fd.mount: Deactivated successfully. Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.712 [INFO][4213] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.712 [INFO][4213] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" iface="eth0" netns="/var/run/netns/cni-395be19f-ae08-f771-e373-58a220442a2d" Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.712 [INFO][4213] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" iface="eth0" netns="/var/run/netns/cni-395be19f-ae08-f771-e373-58a220442a2d" Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.714 [INFO][4213] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" iface="eth0" netns="/var/run/netns/cni-395be19f-ae08-f771-e373-58a220442a2d" Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.714 [INFO][4213] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.714 [INFO][4213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.842 [INFO][4287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" HandleID="k8s-pod-network.854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.854 [INFO][4287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.855 [INFO][4287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.890 [WARNING][4287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" HandleID="k8s-pod-network.854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.890 [INFO][4287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" HandleID="k8s-pod-network.854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.898 [INFO][4287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:29.926425 containerd[1455]: 2024-12-13 02:27:29.902 [INFO][4213] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:29.926456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3363395852.mount: Deactivated successfully. Dec 13 02:27:29.927894 containerd[1455]: time="2024-12-13T02:27:29.926983311Z" level=info msg="TearDown network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\" successfully" Dec 13 02:27:29.927894 containerd[1455]: time="2024-12-13T02:27:29.927014570Z" level=info msg="StopPodSandbox for \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\" returns successfully" Dec 13 02:27:29.932094 containerd[1455]: time="2024-12-13T02:27:29.932062387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dd4dd5bd7-zwr86,Uid:d1e87c6a-e32e-4fa8-9314-e0438c9aec4d,Namespace:calico-system,Attempt:1,}" Dec 13 02:27:29.939994 systemd[1]: run-netns-cni\x2d395be19f\x2dae08\x2df771\x2de373\x2d58a220442a2d.mount: Deactivated successfully. 
Dec 13 02:27:29.949205 containerd[1455]: time="2024-12-13T02:27:29.949037769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rg7ts,Uid:91c52a03-065a-4299-8a52-a6f37a97ba45,Namespace:kube-system,Attempt:1,} returns sandbox id \"f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac\"" Dec 13 02:27:29.960927 containerd[1455]: time="2024-12-13T02:27:29.960863844Z" level=info msg="CreateContainer within sandbox \"f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:27:29.962898 containerd[1455]: time="2024-12-13T02:27:29.962707224Z" level=info msg="CreateContainer within sandbox \"9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a88745084da4db4a70e4ad1822fcbc8477e588308776e586384ac63c235a0d67\"" Dec 13 02:27:29.966240 containerd[1455]: time="2024-12-13T02:27:29.965927170Z" level=info msg="StartContainer for \"a88745084da4db4a70e4ad1822fcbc8477e588308776e586384ac63c235a0d67\"" Dec 13 02:27:30.009304 systemd[1]: Started cri-containerd-a88745084da4db4a70e4ad1822fcbc8477e588308776e586384ac63c235a0d67.scope - libcontainer container a88745084da4db4a70e4ad1822fcbc8477e588308776e586384ac63c235a0d67. Dec 13 02:27:30.275588 systemd-networkd[1365]: calif8e82622d59: Link UP Dec 13 02:27:30.276219 systemd-networkd[1365]: calif8e82622d59: Gained carrier Dec 13 02:27:30.296973 containerd[1455]: time="2024-12-13T02:27:30.293932563Z" level=info msg="StartContainer for \"a88745084da4db4a70e4ad1822fcbc8477e588308776e586384ac63c235a0d67\" returns successfully" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:29.936 [INFO][4323] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0 calico-apiserver-57b88875bb- calico-apiserver 6892dbf0-f6b7-4660-b308-d92eaf9f3043 823 0 2024-12-13 02:26:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57b88875bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-b-462e46fdf9.novalocal calico-apiserver-57b88875bb-7wjvt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8e82622d59 [] []}} ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-7wjvt" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:29.936 [INFO][4323] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-7wjvt" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.068 [INFO][4372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" HandleID="k8s-pod-network.39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" 
Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.140 [INFO][4372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" HandleID="k8s-pod-network.39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005c4590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-b-462e46fdf9.novalocal", "pod":"calico-apiserver-57b88875bb-7wjvt", "timestamp":"2024-12-13 02:27:30.068972625 +0000 UTC"}, Hostname:"ci-4081-2-1-b-462e46fdf9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.141 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.141 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.141 [INFO][4372] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-462e46fdf9.novalocal' Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.146 [INFO][4372] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.153 [INFO][4372] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.160 [INFO][4372] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.164 [INFO][4372] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.167 [INFO][4372] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.167 [INFO][4372] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.171 [INFO][4372] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91 Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.206 [INFO][4372] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.265 [INFO][4372] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.196/26] block=192.168.89.192/26 handle="k8s-pod-network.39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" 
host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.266 [INFO][4372] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.196/26] handle="k8s-pod-network.39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.266 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:30.330455 containerd[1455]: 2024-12-13 02:27:30.266 [INFO][4372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.196/26] IPv6=[] ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" HandleID="k8s-pod-network.39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:30.331939 containerd[1455]: 2024-12-13 02:27:30.269 [INFO][4323] cni-plugin/k8s.go 386: Populated endpoint ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-7wjvt" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0", GenerateName:"calico-apiserver-57b88875bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"6892dbf0-f6b7-4660-b308-d92eaf9f3043", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b88875bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"", Pod:"calico-apiserver-57b88875bb-7wjvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8e82622d59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:30.331939 containerd[1455]: 2024-12-13 02:27:30.270 [INFO][4323] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.196/32] ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-7wjvt" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:30.331939 containerd[1455]: 2024-12-13 02:27:30.270 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8e82622d59 ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-7wjvt" 
WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:30.331939 containerd[1455]: 2024-12-13 02:27:30.276 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-7wjvt" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:30.331939 containerd[1455]: 2024-12-13 02:27:30.277 [INFO][4323] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-7wjvt" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0", GenerateName:"calico-apiserver-57b88875bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"6892dbf0-f6b7-4660-b308-d92eaf9f3043", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b88875bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91", Pod:"calico-apiserver-57b88875bb-7wjvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8e82622d59", MAC:"96:51:de:8a:14:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:30.331939 containerd[1455]: 2024-12-13 02:27:30.323 [INFO][4323] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-7wjvt" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:30.401188 containerd[1455]: time="2024-12-13T02:27:30.398536499Z" level=info msg="CreateContainer within sandbox \"f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ac190ae2b08a714e2b941fcc96fc2b9856be195b4beafe0dc2fb5200052642a\"" Dec 13 02:27:30.401188 containerd[1455]: time="2024-12-13T02:27:30.399588915Z" level=info msg="StartContainer for \"6ac190ae2b08a714e2b941fcc96fc2b9856be195b4beafe0dc2fb5200052642a\"" Dec 13 02:27:30.440390 systemd-networkd[1365]: cali43010fa0bae: Gained IPv6LL Dec 13 02:27:30.442718 containerd[1455]: 
time="2024-12-13T02:27:30.442680176Z" level=info msg="StopPodSandbox for \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\"" Dec 13 02:27:30.454474 containerd[1455]: time="2024-12-13T02:27:30.453429827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:27:30.454474 containerd[1455]: time="2024-12-13T02:27:30.453511882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:27:30.454474 containerd[1455]: time="2024-12-13T02:27:30.453534865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:30.454474 containerd[1455]: time="2024-12-13T02:27:30.453640654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:30.482437 systemd[1]: Started cri-containerd-6ac190ae2b08a714e2b941fcc96fc2b9856be195b4beafe0dc2fb5200052642a.scope - libcontainer container 6ac190ae2b08a714e2b941fcc96fc2b9856be195b4beafe0dc2fb5200052642a. Dec 13 02:27:30.515799 systemd[1]: Started cri-containerd-39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91.scope - libcontainer container 39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91. Dec 13 02:27:30.568609 systemd-networkd[1365]: cali6daff0d3284: Gained IPv6LL Dec 13 02:27:30.591496 containerd[1455]: time="2024-12-13T02:27:30.590116468Z" level=info msg="StartContainer for \"6ac190ae2b08a714e2b941fcc96fc2b9856be195b4beafe0dc2fb5200052642a\" returns successfully" Dec 13 02:27:30.717839 kubelet[2588]: I1213 02:27:30.717508 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kkd64" podStartSLOduration=49.717491626 podStartE2EDuration="49.717491626s" podCreationTimestamp="2024-12-13 02:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:27:30.717319783 +0000 UTC m=+55.460150122" watchObservedRunningTime="2024-12-13 02:27:30.717491626 +0000 UTC m=+55.460321955" Dec 13 02:27:30.741504 containerd[1455]: time="2024-12-13T02:27:30.740724414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b88875bb-7wjvt,Uid:6892dbf0-f6b7-4660-b308-d92eaf9f3043,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91\"" Dec 13 02:27:30.769254 kubelet[2588]: I1213 02:27:30.769175 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rg7ts" podStartSLOduration=49.769154319 podStartE2EDuration="49.769154319s" podCreationTimestamp="2024-12-13 02:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:27:30.768645374 +0000 UTC m=+55.511475733" watchObservedRunningTime="2024-12-13 02:27:30.769154319 +0000 UTC m=+55.511984659" Dec 13 02:27:30.835618 systemd-networkd[1365]: cali8498c7c6f82: Link UP Dec 13 02:27:30.835858 systemd-networkd[1365]: cali8498c7c6f82: Gained carrier Dec 13 02:27:30.862031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount191728197.mount: Deactivated successfully. 
Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.649 [INFO][4491] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.649 [INFO][4491] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" iface="eth0" netns="/var/run/netns/cni-d1fca6a8-527b-e0e1-f355-52a403fa637e" Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.650 [INFO][4491] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" iface="eth0" netns="/var/run/netns/cni-d1fca6a8-527b-e0e1-f355-52a403fa637e" Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.651 [INFO][4491] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" iface="eth0" netns="/var/run/netns/cni-d1fca6a8-527b-e0e1-f355-52a403fa637e" Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.651 [INFO][4491] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.651 [INFO][4491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.752 [INFO][4528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" HandleID="k8s-pod-network.32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.752 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.820 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.857 [WARNING][4528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" HandleID="k8s-pod-network.32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.857 [INFO][4528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" HandleID="k8s-pod-network.32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.861 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:30.870062 containerd[1455]: 2024-12-13 02:27:30.865 [INFO][4491] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:30.876838 systemd[1]: run-netns-cni\x2dd1fca6a8\x2d527b\x2de0e1\x2df355\x2d52a403fa637e.mount: Deactivated successfully. 
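The DEL trace above releases the sandbox's address twice, first by the CNI handle ID and then by the legacy workload ID, and treats a missing handle as success; that is why the WARNING ("Asked to release address but it doesn't exist. Ignoring") is benign. A minimal sketch of that idempotent release, again with a map standing in for the datastore and illustrative handle values:

```go
package main

import "fmt"

// allocations stands in for the IPAM datastore: handle -> assigned IPs.
var allocations = map[string][]string{}

// releaseByHandle frees whatever the handle holds; a missing handle is not
// an error, which makes CNI DEL safe to retry and safe to run against a
// sandbox that was never fully set up.
func releaseByHandle(handle string) {
	ips, ok := allocations[handle]
	if !ok {
		fmt.Printf("WARNING: asked to release %q but it doesn't exist, ignoring\n", handle)
		return
	}
	delete(allocations, handle)
	fmt.Printf("released %v for %q\n", ips, handle)
}

func main() {
	// ipam_plugin.go 412: release by handle ID, then 440: fall back to
	// the workload ID (both values illustrative).
	releaseByHandle("k8s-pod-network.32da3c5d5220…")
	releaseByHandle("calico-apiserver.calico-apiserver-57b88875bb-sw527")
}
```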
Dec 13 02:27:30.879081 containerd[1455]: time="2024-12-13T02:27:30.877959593Z" level=info msg="TearDown network for sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\" successfully" Dec 13 02:27:30.879081 containerd[1455]: time="2024-12-13T02:27:30.878003987Z" level=info msg="StopPodSandbox for \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\" returns successfully" Dec 13 02:27:30.882335 containerd[1455]: time="2024-12-13T02:27:30.881947441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b88875bb-sw527,Uid:ecfe26ab-ca79-4c78-8b1f-6efd69af6d02,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.405 [INFO][4408] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0 calico-kube-controllers-7dd4dd5bd7- calico-system d1e87c6a-e32e-4fa8-9314-e0438c9aec4d 824 0 2024-12-13 02:26:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dd4dd5bd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-b-462e46fdf9.novalocal calico-kube-controllers-7dd4dd5bd7-zwr86 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8498c7c6f82 [] []}} ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Namespace="calico-system" Pod="calico-kube-controllers-7dd4dd5bd7-zwr86" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.405 [INFO][4408] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Namespace="calico-system" Pod="calico-kube-controllers-7dd4dd5bd7-zwr86" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.553 [INFO][4442] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.596 [INFO][4442] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d6010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-b-462e46fdf9.novalocal", "pod":"calico-kube-controllers-7dd4dd5bd7-zwr86", "timestamp":"2024-12-13 02:27:30.551116168 +0000 UTC"}, Hostname:"ci-4081-2-1-b-462e46fdf9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 
02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.596 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.596 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.596 [INFO][4442] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-462e46fdf9.novalocal' Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.611 [INFO][4442] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.679 [INFO][4442] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.708 [INFO][4442] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.726 [INFO][4442] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.734 [INFO][4442] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.740 [INFO][4442] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.750 [INFO][4442] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.771 [INFO][4442] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.819 [INFO][4442] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.197/26] block=192.168.89.192/26 handle="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.819 [INFO][4442] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.197/26] handle="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.819 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
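Each RunPodSandbox above re-enters the same machinery through a standard CNI ADD call. A sketch of that call using the reference libcni helper; the conflist and netns paths are assumptions for a host like this one, and the container ID is the kube-controllers sandbox from the log:

```go
package main

import (
	"context"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Plugin binaries and network config in their conventional locations
	// (assumed; this host's layout may differ).
	cninet := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
	netconf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-calico.conflist")
	if err != nil {
		panic(err)
	}
	rt := &libcni.RuntimeConf{
		ContainerID: "f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e",
		NetNS:       "/var/run/netns/cni-example", // illustrative netns path
		IfName:      "eth0",
	}
	// ADD drives exactly the cni-plugin/ipam trace seen in the log and
	// returns the assigned addresses (192.168.89.197/26 for this sandbox).
	result, err := cninet.AddNetworkList(context.Background(), netconf, rt)
	if err != nil {
		panic(err)
	}
	if err := result.Print(); err != nil { // dump the CNI result as JSON
		panic(err)
	}
}
```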
Dec 13 02:27:30.891179 containerd[1455]: 2024-12-13 02:27:30.819 [INFO][4442] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.197/26] IPv6=[] ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:30.894988 containerd[1455]: 2024-12-13 02:27:30.826 [INFO][4408] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Namespace="calico-system" Pod="calico-kube-controllers-7dd4dd5bd7-zwr86" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0", GenerateName:"calico-kube-controllers-7dd4dd5bd7-", Namespace:"calico-system", SelfLink:"", UID:"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dd4dd5bd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"", Pod:"calico-kube-controllers-7dd4dd5bd7-zwr86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8498c7c6f82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:30.894988 containerd[1455]: 2024-12-13 02:27:30.826 [INFO][4408] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.197/32] ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Namespace="calico-system" Pod="calico-kube-controllers-7dd4dd5bd7-zwr86" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:30.894988 containerd[1455]: 2024-12-13 02:27:30.827 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8498c7c6f82 ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Namespace="calico-system" Pod="calico-kube-controllers-7dd4dd5bd7-zwr86" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:30.894988 containerd[1455]: 2024-12-13 02:27:30.835 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Namespace="calico-system" Pod="calico-kube-controllers-7dd4dd5bd7-zwr86" 
WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:30.894988 containerd[1455]: 2024-12-13 02:27:30.837 [INFO][4408] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Namespace="calico-system" Pod="calico-kube-controllers-7dd4dd5bd7-zwr86" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0", GenerateName:"calico-kube-controllers-7dd4dd5bd7-", Namespace:"calico-system", SelfLink:"", UID:"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dd4dd5bd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e", Pod:"calico-kube-controllers-7dd4dd5bd7-zwr86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8498c7c6f82", MAC:"ca:f9:2f:60:bf:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:30.894988 containerd[1455]: 2024-12-13 02:27:30.888 [INFO][4408] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Namespace="calico-system" Pod="calico-kube-controllers-7dd4dd5bd7-zwr86" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:31.001439 containerd[1455]: time="2024-12-13T02:27:30.999285825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:27:31.001439 containerd[1455]: time="2024-12-13T02:27:30.999369402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:27:31.001439 containerd[1455]: time="2024-12-13T02:27:30.999389440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:31.001439 containerd[1455]: time="2024-12-13T02:27:30.999482404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:31.054590 systemd[1]: Started cri-containerd-f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e.scope - libcontainer container f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e. Dec 13 02:27:31.177923 containerd[1455]: time="2024-12-13T02:27:31.177826652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dd4dd5bd7-zwr86,Uid:d1e87c6a-e32e-4fa8-9314-e0438c9aec4d,Namespace:calico-system,Attempt:1,} returns sandbox id \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\"" Dec 13 02:27:31.214615 systemd-networkd[1365]: cali3de8a40e8ca: Link UP Dec 13 02:27:31.215802 systemd-networkd[1365]: cali3de8a40e8ca: Gained carrier Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.069 [INFO][4558] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0 calico-apiserver-57b88875bb- calico-apiserver ecfe26ab-ca79-4c78-8b1f-6efd69af6d02 838 0 2024-12-13 02:26:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57b88875bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-b-462e46fdf9.novalocal calico-apiserver-57b88875bb-sw527 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3de8a40e8ca [] []}} ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-sw527" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.069 [INFO][4558] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-sw527" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.113 [INFO][4607] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" HandleID="k8s-pod-network.31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.155 [INFO][4607] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" HandleID="k8s-pod-network.31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d9b60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-b-462e46fdf9.novalocal", "pod":"calico-apiserver-57b88875bb-sw527", "timestamp":"2024-12-13 02:27:31.113735313 +0000 UTC"}, Hostname:"ci-4081-2-1-b-462e46fdf9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.156 [INFO][4607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.156 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.156 [INFO][4607] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-462e46fdf9.novalocal' Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.169 [INFO][4607] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.174 [INFO][4607] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.183 [INFO][4607] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.188 [INFO][4607] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.191 [INFO][4607] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.191 [INFO][4607] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.193 [INFO][4607] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.198 [INFO][4607] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.207 [INFO][4607] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.198/26] block=192.168.89.192/26 handle="k8s-pod-network.31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.207 [INFO][4607] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.198/26] handle="k8s-pod-network.31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.207 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:27:31.240192 containerd[1455]: 2024-12-13 02:27:31.207 [INFO][4607] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.198/26] IPv6=[] ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" HandleID="k8s-pod-network.31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:31.246962 containerd[1455]: 2024-12-13 02:27:31.210 [INFO][4558] cni-plugin/k8s.go 386: Populated endpoint ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-sw527" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0", GenerateName:"calico-apiserver-57b88875bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecfe26ab-ca79-4c78-8b1f-6efd69af6d02", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b88875bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"", Pod:"calico-apiserver-57b88875bb-sw527", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3de8a40e8ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:31.246962 containerd[1455]: 2024-12-13 02:27:31.210 [INFO][4558] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.198/32] ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-sw527" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:31.246962 containerd[1455]: 2024-12-13 02:27:31.211 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3de8a40e8ca ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-sw527" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:31.246962 containerd[1455]: 2024-12-13 02:27:31.217 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-sw527" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:31.246962 
containerd[1455]: 2024-12-13 02:27:31.217 [INFO][4558] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-sw527" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0", GenerateName:"calico-apiserver-57b88875bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecfe26ab-ca79-4c78-8b1f-6efd69af6d02", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b88875bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f", Pod:"calico-apiserver-57b88875bb-sw527", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3de8a40e8ca", MAC:"9a:ae:0e:01:a5:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:31.246962 containerd[1455]: 2024-12-13 02:27:31.234 [INFO][4558] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f" Namespace="calico-apiserver" Pod="calico-apiserver-57b88875bb-sw527" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:31.281027 containerd[1455]: time="2024-12-13T02:27:31.280900584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:27:31.281248 containerd[1455]: time="2024-12-13T02:27:31.281043964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:27:31.281248 containerd[1455]: time="2024-12-13T02:27:31.281078428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:31.281388 containerd[1455]: time="2024-12-13T02:27:31.281335961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:31.302639 systemd[1]: Started cri-containerd-31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f.scope - libcontainer container 31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f. 
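systemd's "Started cri-containerd-<id>.scope" lines are its view of containerd creating and starting a task. A minimal sketch of the same create/start sequence through containerd's Go client; the socket path, image reference, and k8s.io namespace follow this log, while the container ID and snapshot name are illustrative, and kubelet's CRI path wires this differently in practice:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	// The CRI pods in this log live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.1")
	if err != nil {
		panic(err)
	}
	container, err := client.NewContainer(ctx, "example-apiserver",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("example-apiserver-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		panic(err)
	}
	// NewTask is the step systemd reports as "Started cri-containerd-<id>.scope".
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		panic(err)
	}
	if err := task.Start(ctx); err != nil {
		panic(err)
	}
	fmt.Println("started task", task.ID())
}
```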
Dec 13 02:27:31.362579 containerd[1455]: time="2024-12-13T02:27:31.362433116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b88875bb-sw527,Uid:ecfe26ab-ca79-4c78-8b1f-6efd69af6d02,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f\"" Dec 13 02:27:31.655513 systemd-networkd[1365]: califace4002a81: Gained IPv6LL Dec 13 02:27:31.871116 containerd[1455]: time="2024-12-13T02:27:31.871004883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:31.872165 containerd[1455]: time="2024-12-13T02:27:31.872053231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 02:27:31.873228 containerd[1455]: time="2024-12-13T02:27:31.873181028Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:31.876044 containerd[1455]: time="2024-12-13T02:27:31.875995542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:31.876940 containerd[1455]: time="2024-12-13T02:27:31.876846250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.377881429s" Dec 13 02:27:31.876940 containerd[1455]: time="2024-12-13T02:27:31.876897305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 02:27:31.880553 containerd[1455]: time="2024-12-13T02:27:31.880207680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:27:31.882149 containerd[1455]: time="2024-12-13T02:27:31.882032576Z" level=info msg="CreateContainer within sandbox \"f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 02:27:31.905953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839278567.mount: Deactivated successfully. Dec 13 02:27:31.920682 containerd[1455]: time="2024-12-13T02:27:31.920637118Z" level=info msg="CreateContainer within sandbox \"f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1948819f08085d5e56440bc99dd3109484ccb3da959f17ddd689fa430ca35f4d\"" Dec 13 02:27:31.923113 containerd[1455]: time="2024-12-13T02:27:31.921587152Z" level=info msg="StartContainer for \"1948819f08085d5e56440bc99dd3109484ccb3da959f17ddd689fa430ca35f4d\"" Dec 13 02:27:31.964327 systemd[1]: Started cri-containerd-1948819f08085d5e56440bc99dd3109484ccb3da959f17ddd689fa430ca35f4d.scope - libcontainer container 1948819f08085d5e56440bc99dd3109484ccb3da959f17ddd689fa430ca35f4d. 
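The csi image pull above surfaces as ImageCreate and "active requests" progress events plus a total wall time ("in 2.377881429s"). A sketch of the equivalent pull, timed the same way, through the containerd client; the socket path and namespace are assumptions carried over from this host:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	// WithPullUnpack fetches and unpacks in one call, like the log's
	// pull-then-ImageCreate sequence for ghcr.io/flatcar/calico/csi.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.1",
		containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Pulled %s in %s\n", image.Name(), time.Since(start))
}
```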
Dec 13 02:27:32.006229 containerd[1455]: time="2024-12-13T02:27:32.005974373Z" level=info msg="StartContainer for \"1948819f08085d5e56440bc99dd3109484ccb3da959f17ddd689fa430ca35f4d\" returns successfully" Dec 13 02:27:32.039460 systemd-networkd[1365]: calif8e82622d59: Gained IPv6LL Dec 13 02:27:32.295685 systemd-networkd[1365]: cali8498c7c6f82: Gained IPv6LL Dec 13 02:27:32.849367 systemd[1]: run-containerd-runc-k8s.io-1948819f08085d5e56440bc99dd3109484ccb3da959f17ddd689fa430ca35f4d-runc.PWo4i2.mount: Deactivated successfully. Dec 13 02:27:33.191592 systemd-networkd[1365]: cali3de8a40e8ca: Gained IPv6LL Dec 13 02:27:35.751147 containerd[1455]: time="2024-12-13T02:27:35.750863309Z" level=info msg="StopPodSandbox for \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\"" Dec 13 02:27:35.938117 containerd[1455]: time="2024-12-13T02:27:35.938041200Z" level=info msg="StopContainer for \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\" with timeout 300 (s)" Dec 13 02:27:35.945181 containerd[1455]: time="2024-12-13T02:27:35.943269072Z" level=info msg="Stop container \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\" with signal terminated" Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:35.914 [WARNING][4744] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e426aab6-f0cc-4d85-8ff0-a831034783ac", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9", Pod:"coredns-6f6b679f8f-kkd64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6daff0d3284", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:35.918 [INFO][4744] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:36.062203 
containerd[1455]: 2024-12-13 02:27:35.918 [INFO][4744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" iface="eth0" netns="" Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:35.918 [INFO][4744] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:35.918 [INFO][4744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:36.029 [INFO][4752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" HandleID="k8s-pod-network.a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:36.029 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:36.029 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:36.049 [WARNING][4752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" HandleID="k8s-pod-network.a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:36.051 [INFO][4752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" HandleID="k8s-pod-network.a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:36.055 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:36.062203 containerd[1455]: 2024-12-13 02:27:36.059 [INFO][4744] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:36.066897 containerd[1455]: time="2024-12-13T02:27:36.062400383Z" level=info msg="TearDown network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\" successfully" Dec 13 02:27:36.066897 containerd[1455]: time="2024-12-13T02:27:36.062428466Z" level=info msg="StopPodSandbox for \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\" returns successfully" Dec 13 02:27:36.066897 containerd[1455]: time="2024-12-13T02:27:36.063900649Z" level=info msg="RemovePodSandbox for \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\"" Dec 13 02:27:36.066897 containerd[1455]: time="2024-12-13T02:27:36.063933220Z" level=info msg="Forcibly stopping sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\"" Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.247 [WARNING][4780] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e426aab6-f0cc-4d85-8ff0-a831034783ac", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"9652d5565c8ae480d8ab858ab4841cf112bb7104e0703ffc5433410f04632af9", Pod:"coredns-6f6b679f8f-kkd64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6daff0d3284", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.247 [INFO][4780] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.247 [INFO][4780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" iface="eth0" netns="" Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.247 [INFO][4780] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.247 [INFO][4780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.313 [INFO][4800] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" HandleID="k8s-pod-network.a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.314 [INFO][4800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.314 [INFO][4800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.334 [WARNING][4800] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" HandleID="k8s-pod-network.a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.334 [INFO][4800] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" HandleID="k8s-pod-network.a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--kkd64-eth0" Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.387 [INFO][4800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:36.400258 containerd[1455]: 2024-12-13 02:27:36.394 [INFO][4780] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11" Dec 13 02:27:36.400258 containerd[1455]: time="2024-12-13T02:27:36.399592306Z" level=info msg="TearDown network for sandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\" successfully" Dec 13 02:27:36.884177 containerd[1455]: time="2024-12-13T02:27:36.884096998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:27:36.885710 containerd[1455]: time="2024-12-13T02:27:36.884230528Z" level=info msg="RemovePodSandbox \"a40d664402ea6594392dfb50e5d363e2b35c3579c5d2a66f625280f437468f11\" returns successfully" Dec 13 02:27:36.886552 containerd[1455]: time="2024-12-13T02:27:36.885972948Z" level=info msg="StopPodSandbox for \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\"" Dec 13 02:27:36.899799 containerd[1455]: time="2024-12-13T02:27:36.899749083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:36.902162 containerd[1455]: time="2024-12-13T02:27:36.902045854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 02:27:36.908509 containerd[1455]: time="2024-12-13T02:27:36.908394499Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:36.910147 containerd[1455]: time="2024-12-13T02:27:36.909992147Z" level=info msg="StopContainer for \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\" with timeout 5 (s)" Dec 13 02:27:36.911232 containerd[1455]: time="2024-12-13T02:27:36.911211496Z" level=info msg="Stop container \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\" with signal terminated" Dec 13 02:27:36.917430 containerd[1455]: time="2024-12-13T02:27:36.917385012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:36.923039 containerd[1455]: time="2024-12-13T02:27:36.922288575Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.041869359s" Dec 13 02:27:36.923039 containerd[1455]: time="2024-12-13T02:27:36.922328891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 02:27:36.927082 containerd[1455]: time="2024-12-13T02:27:36.926941208Z" level=info msg="CreateContainer within sandbox \"39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:27:36.942706 containerd[1455]: time="2024-12-13T02:27:36.942573095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 02:27:36.963009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321072339.mount: Deactivated successfully. Dec 13 02:27:36.971039 containerd[1455]: time="2024-12-13T02:27:36.970706166Z" level=info msg="CreateContainer within sandbox \"39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e1a1b2116f5f5b9fe0a8da2b530c14c180c7be7b833e2b1a4a231fed77e8086d\"" Dec 13 02:27:36.971836 containerd[1455]: time="2024-12-13T02:27:36.971791643Z" level=info msg="StartContainer for \"e1a1b2116f5f5b9fe0a8da2b530c14c180c7be7b833e2b1a4a231fed77e8086d\"" Dec 13 02:27:36.998462 systemd[1]: cri-containerd-5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f.scope: Deactivated successfully. Dec 13 02:27:36.999809 systemd[1]: cri-containerd-5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f.scope: Consumed 2.015s CPU time. Dec 13 02:27:37.050912 systemd[1]: Started cri-containerd-e1a1b2116f5f5b9fe0a8da2b530c14c180c7be7b833e2b1a4a231fed77e8086d.scope - libcontainer container e1a1b2116f5f5b9fe0a8da2b530c14c180c7be7b833e2b1a4a231fed77e8086d. Dec 13 02:27:37.083960 containerd[1455]: time="2024-12-13T02:27:37.082859342Z" level=info msg="shim disconnected" id=5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f namespace=k8s.io Dec 13 02:27:37.083960 containerd[1455]: time="2024-12-13T02:27:37.083169084Z" level=warning msg="cleaning up after shim disconnected" id=5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f namespace=k8s.io Dec 13 02:27:37.083960 containerd[1455]: time="2024-12-13T02:27:37.083523869Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:37.121187 containerd[1455]: time="2024-12-13T02:27:37.121085855Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:27:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.017 [WARNING][4829] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0", GenerateName:"calico-apiserver-57b88875bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"6892dbf0-f6b7-4660-b308-d92eaf9f3043", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b88875bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91", Pod:"calico-apiserver-57b88875bb-7wjvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8e82622d59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.018 [INFO][4829] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.018 [INFO][4829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" iface="eth0" netns="" Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.018 [INFO][4829] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.018 [INFO][4829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.085 [INFO][4850] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" HandleID="k8s-pod-network.1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.085 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.085 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.105 [WARNING][4850] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" HandleID="k8s-pod-network.1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.106 [INFO][4850] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" HandleID="k8s-pod-network.1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.108 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:37.126094 containerd[1455]: 2024-12-13 02:27:37.114 [INFO][4829] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:37.154154 containerd[1455]: time="2024-12-13T02:27:37.126486721Z" level=info msg="TearDown network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\" successfully" Dec 13 02:27:37.154154 containerd[1455]: time="2024-12-13T02:27:37.126519583Z" level=info msg="StopPodSandbox for \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\" returns successfully" Dec 13 02:27:37.154154 containerd[1455]: time="2024-12-13T02:27:37.129552986Z" level=info msg="RemovePodSandbox for \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\"" Dec 13 02:27:37.154154 containerd[1455]: time="2024-12-13T02:27:37.129602979Z" level=info msg="Forcibly stopping sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\"" Dec 13 02:27:37.141563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f-rootfs.mount: Deactivated successfully. Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.197 [WARNING][4908] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0", GenerateName:"calico-apiserver-57b88875bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"6892dbf0-f6b7-4660-b308-d92eaf9f3043", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b88875bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"39490daf7ea4d7b749f2f109c35a3c8658747a1aceefb8940c41460702cf5e91", Pod:"calico-apiserver-57b88875bb-7wjvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8e82622d59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.197 [INFO][4908] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.197 [INFO][4908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" iface="eth0" netns="" Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.197 [INFO][4908] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.197 [INFO][4908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.223 [INFO][4920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" HandleID="k8s-pod-network.1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.223 [INFO][4920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.223 [INFO][4920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.233 [WARNING][4920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" HandleID="k8s-pod-network.1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.233 [INFO][4920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" HandleID="k8s-pod-network.1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--7wjvt-eth0" Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.235 [INFO][4920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:37.241517 containerd[1455]: 2024-12-13 02:27:37.238 [INFO][4908] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3" Dec 13 02:27:37.243605 containerd[1455]: time="2024-12-13T02:27:37.241610902Z" level=info msg="TearDown network for sandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\" successfully" Dec 13 02:27:37.528016 containerd[1455]: time="2024-12-13T02:27:37.527418817Z" level=info msg="StartContainer for \"e1a1b2116f5f5b9fe0a8da2b530c14c180c7be7b833e2b1a4a231fed77e8086d\" returns successfully" Dec 13 02:27:37.575810 containerd[1455]: time="2024-12-13T02:27:37.575732852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:27:37.575810 containerd[1455]: time="2024-12-13T02:27:37.575814184Z" level=info msg="RemovePodSandbox \"1d81cb139185fc8e7a3fedeca4dc3ac7da3980eb65545d492f853cea14fda0b3\" returns successfully" Dec 13 02:27:37.577284 containerd[1455]: time="2024-12-13T02:27:37.576535648Z" level=info msg="StopPodSandbox for \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\"" Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.638 [WARNING][4943] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0", GenerateName:"calico-kube-controllers-7dd4dd5bd7-", Namespace:"calico-system", SelfLink:"", UID:"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dd4dd5bd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e", Pod:"calico-kube-controllers-7dd4dd5bd7-zwr86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8498c7c6f82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.638 [INFO][4943] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.638 [INFO][4943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" iface="eth0" netns="" Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.638 [INFO][4943] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.638 [INFO][4943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.669 [INFO][4951] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" HandleID="k8s-pod-network.854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.670 [INFO][4951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.670 [INFO][4951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.679 [WARNING][4951] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" HandleID="k8s-pod-network.854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.679 [INFO][4951] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" HandleID="k8s-pod-network.854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.681 [INFO][4951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:37.684825 containerd[1455]: 2024-12-13 02:27:37.682 [INFO][4943] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:37.689170 containerd[1455]: time="2024-12-13T02:27:37.684858666Z" level=info msg="TearDown network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\" successfully" Dec 13 02:27:37.689170 containerd[1455]: time="2024-12-13T02:27:37.684886969Z" level=info msg="StopPodSandbox for \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\" returns successfully" Dec 13 02:27:37.689170 containerd[1455]: time="2024-12-13T02:27:37.685895181Z" level=info msg="RemovePodSandbox for \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\"" Dec 13 02:27:37.689170 containerd[1455]: time="2024-12-13T02:27:37.685928413Z" level=info msg="Forcibly stopping sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\"" Dec 13 02:27:37.827145 systemd[1]: cri-containerd-7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe.scope: Deactivated successfully. Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.774 [WARNING][4969] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0", GenerateName:"calico-kube-controllers-7dd4dd5bd7-", Namespace:"calico-system", SelfLink:"", UID:"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dd4dd5bd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e", Pod:"calico-kube-controllers-7dd4dd5bd7-zwr86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8498c7c6f82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.774 [INFO][4969] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.774 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" iface="eth0" netns="" Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.774 [INFO][4969] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.774 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.824 [INFO][4976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" HandleID="k8s-pod-network.854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.825 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.825 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.837 [WARNING][4976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" HandleID="k8s-pod-network.854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.837 [INFO][4976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" HandleID="k8s-pod-network.854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.840 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:37.847483 containerd[1455]: 2024-12-13 02:27:37.842 [INFO][4969] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167" Dec 13 02:27:37.847952 containerd[1455]: time="2024-12-13T02:27:37.847508742Z" level=info msg="TearDown network for sandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\" successfully" Dec 13 02:27:37.862837 containerd[1455]: time="2024-12-13T02:27:37.861890153Z" level=info msg="shim disconnected" id=7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe namespace=k8s.io Dec 13 02:27:37.862639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe-rootfs.mount: Deactivated successfully. Dec 13 02:27:37.863179 containerd[1455]: time="2024-12-13T02:27:37.863159565Z" level=warning msg="cleaning up after shim disconnected" id=7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe namespace=k8s.io Dec 13 02:27:37.863304 containerd[1455]: time="2024-12-13T02:27:37.863286914Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:38.764762 kubelet[2588]: I1213 02:27:38.764682 2588 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:27:38.911272 containerd[1455]: time="2024-12-13T02:27:38.910812084Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 02:27:39.017490 containerd[1455]: time="2024-12-13T02:27:39.017118222Z" level=info msg="StopContainer for \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\" returns successfully" Dec 13 02:27:39.017858 containerd[1455]: time="2024-12-13T02:27:39.017254258Z" level=info msg="RemovePodSandbox \"854b97bed7a94239df4b9d5f0f59b773722d4510720859d733556a348ba02167\" returns successfully" Dec 13 02:27:39.018607 containerd[1455]: time="2024-12-13T02:27:39.018346507Z" level=info msg="StopPodSandbox for \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\"" Dec 13 02:27:39.018607 containerd[1455]: time="2024-12-13T02:27:39.018380381Z" level=info msg="Container to stop \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:27:39.018607 containerd[1455]: time="2024-12-13T02:27:39.018395840Z" level=info msg="Container to stop \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:27:39.018607 containerd[1455]: time="2024-12-13T02:27:39.018407822Z" level=info msg="Container to stop \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:27:39.018607 containerd[1455]: time="2024-12-13T02:27:39.018473235Z" level=info msg="StopPodSandbox for \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\"" Dec 13 02:27:39.032547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f-shm.mount: Deactivated successfully. Dec 13 02:27:39.034990 containerd[1455]: time="2024-12-13T02:27:39.034416887Z" level=info msg="StopContainer for \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\" returns successfully" Dec 13 02:27:39.041139 containerd[1455]: time="2024-12-13T02:27:39.040880506Z" level=info msg="StopPodSandbox for \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\"" Dec 13 02:27:39.041139 containerd[1455]: time="2024-12-13T02:27:39.040926162Z" level=info msg="Container to stop \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:27:39.053482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd-shm.mount: Deactivated successfully. Dec 13 02:27:39.072184 systemd[1]: cri-containerd-1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd.scope: Deactivated successfully. Dec 13 02:27:39.083502 systemd[1]: cri-containerd-c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f.scope: Deactivated successfully. Dec 13 02:27:39.140600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd-rootfs.mount: Deactivated successfully. 
Dec 13 02:27:39.146750 containerd[1455]: time="2024-12-13T02:27:39.145677830Z" level=info msg="shim disconnected" id=1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd namespace=k8s.io Dec 13 02:27:39.146750 containerd[1455]: time="2024-12-13T02:27:39.145736209Z" level=warning msg="cleaning up after shim disconnected" id=1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd namespace=k8s.io Dec 13 02:27:39.146750 containerd[1455]: time="2024-12-13T02:27:39.145764382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:39.156168 containerd[1455]: time="2024-12-13T02:27:39.154054660Z" level=info msg="shim disconnected" id=c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f namespace=k8s.io Dec 13 02:27:39.156168 containerd[1455]: time="2024-12-13T02:27:39.155056090Z" level=warning msg="cleaning up after shim disconnected" id=c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f namespace=k8s.io Dec 13 02:27:39.156168 containerd[1455]: time="2024-12-13T02:27:39.155072301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:39.157221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f-rootfs.mount: Deactivated successfully. Dec 13 02:27:39.204534 containerd[1455]: time="2024-12-13T02:27:39.204479162Z" level=info msg="TearDown network for sandbox \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\" successfully" Dec 13 02:27:39.204534 containerd[1455]: time="2024-12-13T02:27:39.204517745Z" level=info msg="StopPodSandbox for \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\" returns successfully" Dec 13 02:27:39.231164 containerd[1455]: time="2024-12-13T02:27:39.229976954Z" level=info msg="TearDown network for sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" successfully" Dec 13 02:27:39.231164 containerd[1455]: time="2024-12-13T02:27:39.230010497Z" level=info msg="StopPodSandbox for \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" returns successfully" Dec 13 02:27:39.246300 kubelet[2588]: I1213 02:27:39.246242 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57b88875bb-7wjvt" podStartSLOduration=38.066942721 podStartE2EDuration="44.246202645s" podCreationTimestamp="2024-12-13 02:26:55 +0000 UTC" firstStartedPulling="2024-12-13 02:27:30.745502605 +0000 UTC m=+55.488332934" lastFinishedPulling="2024-12-13 02:27:36.924762529 +0000 UTC m=+61.667592858" observedRunningTime="2024-12-13 02:27:38.630643703 +0000 UTC m=+63.373474063" watchObservedRunningTime="2024-12-13 02:27:39.246202645 +0000 UTC m=+63.989032974" Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.170 [WARNING][5023] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0", GenerateName:"calico-apiserver-57b88875bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecfe26ab-ca79-4c78-8b1f-6efd69af6d02", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b88875bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f", Pod:"calico-apiserver-57b88875bb-sw527", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3de8a40e8ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.170 [INFO][5023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.170 [INFO][5023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" iface="eth0" netns="" Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.170 [INFO][5023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.170 [INFO][5023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.234 [INFO][5078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" HandleID="k8s-pod-network.32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.234 [INFO][5078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.234 [INFO][5078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.251 [WARNING][5078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" HandleID="k8s-pod-network.32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.251 [INFO][5078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" HandleID="k8s-pod-network.32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.254 [INFO][5078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:39.261072 containerd[1455]: 2024-12-13 02:27:39.257 [INFO][5023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:39.261782 containerd[1455]: time="2024-12-13T02:27:39.261734674Z" level=info msg="TearDown network for sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\" successfully" Dec 13 02:27:39.261886 containerd[1455]: time="2024-12-13T02:27:39.261867353Z" level=info msg="StopPodSandbox for \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\" returns successfully" Dec 13 02:27:39.262706 containerd[1455]: time="2024-12-13T02:27:39.262683484Z" level=info msg="RemovePodSandbox for \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\"" Dec 13 02:27:39.262927 containerd[1455]: time="2024-12-13T02:27:39.262891535Z" level=info msg="Forcibly stopping sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\"" Dec 13 02:27:39.303316 kubelet[2588]: E1213 02:27:39.301853 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" containerName="calico-node" Dec 13 02:27:39.303316 kubelet[2588]: E1213 02:27:39.301916 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3313764-c25a-40ad-9f30-c39affa2bbac" containerName="calico-typha" Dec 13 02:27:39.303316 kubelet[2588]: E1213 02:27:39.301981 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" containerName="flexvol-driver" Dec 13 02:27:39.303316 kubelet[2588]: E1213 02:27:39.301993 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" containerName="install-cni" Dec 13 02:27:39.303316 kubelet[2588]: I1213 02:27:39.302083 2588 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3313764-c25a-40ad-9f30-c39affa2bbac" containerName="calico-typha" Dec 13 02:27:39.303316 kubelet[2588]: I1213 02:27:39.302094 2588 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" containerName="calico-node" Dec 13 02:27:39.340101 systemd[1]: Created slice kubepods-besteffort-podf6dbf03e_542f_423c_92ac_07ec29c4f125.slice - libcontainer container kubepods-besteffort-podf6dbf03e_542f_423c_92ac_07ec29c4f125.slice. Dec 13 02:27:39.350453 systemd[1]: Created slice kubepods-besteffort-pod565fe6e4_8448_4596_a333_f4e46db847f9.slice - libcontainer container kubepods-besteffort-pod565fe6e4_8448_4596_a333_f4e46db847f9.slice. 
Dec 13 02:27:39.356145 kubelet[2588]: I1213 02:27:39.356088 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3313764-c25a-40ad-9f30-c39affa2bbac-tigera-ca-bundle\") pod \"d3313764-c25a-40ad-9f30-c39affa2bbac\" (UID: \"d3313764-c25a-40ad-9f30-c39affa2bbac\") " Dec 13 02:27:39.356497 kubelet[2588]: I1213 02:27:39.356338 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-bin-dir\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.356497 kubelet[2588]: I1213 02:27:39.356364 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-var-run-calico\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.356763 kubelet[2588]: I1213 02:27:39.356604 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-xtables-lock\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.356763 kubelet[2588]: I1213 02:27:39.356633 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-flexvol-driver-host\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.357200 kubelet[2588]: I1213 02:27:39.356817 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d3313764-c25a-40ad-9f30-c39affa2bbac-typha-certs\") pod \"d3313764-c25a-40ad-9f30-c39affa2bbac\" (UID: \"d3313764-c25a-40ad-9f30-c39affa2bbac\") " Dec 13 02:27:39.357200 kubelet[2588]: I1213 02:27:39.356849 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-lib-modules\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.357200 kubelet[2588]: I1213 02:27:39.357024 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-tigera-ca-bundle\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.360258 kubelet[2588]: I1213 02:27:39.357270 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mghdw\" (UniqueName: \"kubernetes.io/projected/d3313764-c25a-40ad-9f30-c39affa2bbac-kube-api-access-mghdw\") pod \"d3313764-c25a-40ad-9f30-c39affa2bbac\" (UID: \"d3313764-c25a-40ad-9f30-c39affa2bbac\") " Dec 13 02:27:39.380060 systemd[1]: var-lib-kubelet-pods-d3313764\x2dc25a\x2d40ad\x2d9f30\x2dc39affa2bbac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmghdw.mount: Deactivated successfully. 
Dec 13 02:27:39.384572 kubelet[2588]: I1213 02:27:39.384538 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-node-certs\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.384889 kubelet[2588]: I1213 02:27:39.384872 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-log-dir\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.385057 kubelet[2588]: I1213 02:27:39.384963 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-net-dir\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.385315 kubelet[2588]: I1213 02:27:39.385299 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-policysync\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.386677 kubelet[2588]: I1213 02:27:39.385451 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2g8d\" (UniqueName: \"kubernetes.io/projected/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-kube-api-access-s2g8d\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.386677 kubelet[2588]: I1213 02:27:39.385502 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-var-lib-calico\") pod \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\" (UID: \"d7e8fe28-38c6-4e23-979c-3075ce4b6bf5\") " Dec 13 02:27:39.386677 kubelet[2588]: I1213 02:27:39.385581 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:27:39.386677 kubelet[2588]: I1213 02:27:39.379865 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:27:39.386677 kubelet[2588]: I1213 02:27:39.386379 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:27:39.386835 kubelet[2588]: I1213 02:27:39.386409 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:27:39.388229 kubelet[2588]: I1213 02:27:39.388204 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3313764-c25a-40ad-9f30-c39affa2bbac-kube-api-access-mghdw" (OuterVolumeSpecName: "kube-api-access-mghdw") pod "d3313764-c25a-40ad-9f30-c39affa2bbac" (UID: "d3313764-c25a-40ad-9f30-c39affa2bbac"). InnerVolumeSpecName "kube-api-access-mghdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:27:39.397855 kubelet[2588]: I1213 02:27:39.397556 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:27:39.398856 kubelet[2588]: I1213 02:27:39.398816 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:27:39.400496 kubelet[2588]: I1213 02:27:39.398994 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:27:39.400496 kubelet[2588]: I1213 02:27:39.399020 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-policysync" (OuterVolumeSpecName: "policysync") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:27:39.400496 kubelet[2588]: I1213 02:27:39.399466 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:27:39.400496 kubelet[2588]: I1213 02:27:39.400245 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3313764-c25a-40ad-9f30-c39affa2bbac-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "d3313764-c25a-40ad-9f30-c39affa2bbac" (UID: "d3313764-c25a-40ad-9f30-c39affa2bbac"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:27:39.414873 kubelet[2588]: I1213 02:27:39.413305 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-node-certs" (OuterVolumeSpecName: "node-certs") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:27:39.419712 kubelet[2588]: I1213 02:27:39.419631 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-kube-api-access-s2g8d" (OuterVolumeSpecName: "kube-api-access-s2g8d") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "kube-api-access-s2g8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:27:39.424817 kubelet[2588]: I1213 02:27:39.424765 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3313764-c25a-40ad-9f30-c39affa2bbac-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d3313764-c25a-40ad-9f30-c39affa2bbac" (UID: "d3313764-c25a-40ad-9f30-c39affa2bbac"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:27:39.430731 kubelet[2588]: I1213 02:27:39.430681 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" (UID: "d7e8fe28-38c6-4e23-979c-3075ce4b6bf5"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:27:39.455913 systemd[1]: Removed slice kubepods-besteffort-podd3313764_c25a_40ad_9f30_c39affa2bbac.slice - libcontainer container kubepods-besteffort-podd3313764_c25a_40ad_9f30_c39affa2bbac.slice. Dec 13 02:27:39.463518 systemd[1]: Removed slice kubepods-besteffort-podd7e8fe28_38c6_4e23_979c_3075ce4b6bf5.slice - libcontainer container kubepods-besteffort-podd7e8fe28_38c6_4e23_979c_3075ce4b6bf5.slice. Dec 13 02:27:39.463633 systemd[1]: kubepods-besteffort-podd7e8fe28_38c6_4e23_979c_3075ce4b6bf5.slice: Consumed 2.770s CPU time. Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.372 [WARNING][5111] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0", GenerateName:"calico-apiserver-57b88875bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecfe26ab-ca79-4c78-8b1f-6efd69af6d02", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b88875bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f", Pod:"calico-apiserver-57b88875bb-sw527", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3de8a40e8ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.373 [INFO][5111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.374 [INFO][5111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" iface="eth0" netns="" Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.374 [INFO][5111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.374 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.449 [INFO][5119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" HandleID="k8s-pod-network.32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.449 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.449 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.485 [WARNING][5119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" HandleID="k8s-pod-network.32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.485 [INFO][5119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" HandleID="k8s-pod-network.32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--apiserver--57b88875bb--sw527-eth0" Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.488 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:39.492473 containerd[1455]: 2024-12-13 02:27:39.490 [INFO][5111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500" Dec 13 02:27:39.492950 containerd[1455]: time="2024-12-13T02:27:39.492541539Z" level=info msg="TearDown network for sandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\" successfully" Dec 13 02:27:39.546852 kubelet[2588]: I1213 02:27:39.546756 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/565fe6e4-8448-4596-a333-f4e46db847f9-cni-net-dir\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.546852 kubelet[2588]: I1213 02:27:39.546829 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/565fe6e4-8448-4596-a333-f4e46db847f9-policysync\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.546852 kubelet[2588]: I1213 02:27:39.546858 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/565fe6e4-8448-4596-a333-f4e46db847f9-cni-bin-dir\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.547312 kubelet[2588]: I1213 02:27:39.546885 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6dbf03e-542f-423c-92ac-07ec29c4f125-tigera-ca-bundle\") pod \"calico-typha-fb679588f-f4ql7\" (UID: \"f6dbf03e-542f-423c-92ac-07ec29c4f125\") " pod="calico-system/calico-typha-fb679588f-f4ql7" Dec 13 02:27:39.547312 kubelet[2588]: I1213 02:27:39.546923 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/565fe6e4-8448-4596-a333-f4e46db847f9-cni-log-dir\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.547312 kubelet[2588]: I1213 02:27:39.546955 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/565fe6e4-8448-4596-a333-f4e46db847f9-node-certs\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 
02:27:39.547312 kubelet[2588]: I1213 02:27:39.546983 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/565fe6e4-8448-4596-a333-f4e46db847f9-var-run-calico\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.547312 kubelet[2588]: I1213 02:27:39.547008 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/565fe6e4-8448-4596-a333-f4e46db847f9-var-lib-calico\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.547651 kubelet[2588]: I1213 02:27:39.547028 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7b5\" (UniqueName: \"kubernetes.io/projected/565fe6e4-8448-4596-a333-f4e46db847f9-kube-api-access-5b7b5\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.547651 kubelet[2588]: I1213 02:27:39.547049 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nkt2\" (UniqueName: \"kubernetes.io/projected/f6dbf03e-542f-423c-92ac-07ec29c4f125-kube-api-access-4nkt2\") pod \"calico-typha-fb679588f-f4ql7\" (UID: \"f6dbf03e-542f-423c-92ac-07ec29c4f125\") " pod="calico-system/calico-typha-fb679588f-f4ql7" Dec 13 02:27:39.547651 kubelet[2588]: I1213 02:27:39.547069 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/565fe6e4-8448-4596-a333-f4e46db847f9-lib-modules\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.547651 kubelet[2588]: I1213 02:27:39.547090 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/565fe6e4-8448-4596-a333-f4e46db847f9-xtables-lock\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.547651 kubelet[2588]: I1213 02:27:39.547111 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f6dbf03e-542f-423c-92ac-07ec29c4f125-typha-certs\") pod \"calico-typha-fb679588f-f4ql7\" (UID: \"f6dbf03e-542f-423c-92ac-07ec29c4f125\") " pod="calico-system/calico-typha-fb679588f-f4ql7" Dec 13 02:27:39.547956 kubelet[2588]: I1213 02:27:39.547151 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/565fe6e4-8448-4596-a333-f4e46db847f9-tigera-ca-bundle\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 02:27:39.547956 kubelet[2588]: I1213 02:27:39.547175 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/565fe6e4-8448-4596-a333-f4e46db847f9-flexvol-driver-host\") pod \"calico-node-mtq2r\" (UID: \"565fe6e4-8448-4596-a333-f4e46db847f9\") " pod="calico-system/calico-node-mtq2r" Dec 13 
02:27:39.555444 kubelet[2588]: I1213 02:27:39.555281 2588 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-var-lib-calico\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.555444 kubelet[2588]: I1213 02:27:39.555317 2588 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-bin-dir\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.555444 kubelet[2588]: I1213 02:27:39.555330 2588 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-var-run-calico\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.555444 kubelet[2588]: I1213 02:27:39.555344 2588 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3313764-c25a-40ad-9f30-c39affa2bbac-tigera-ca-bundle\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.555444 kubelet[2588]: I1213 02:27:39.555356 2588 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-flexvol-driver-host\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.555444 kubelet[2588]: I1213 02:27:39.555367 2588 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-xtables-lock\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.555444 kubelet[2588]: I1213 02:27:39.555377 2588 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-lib-modules\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.558691 kubelet[2588]: I1213 02:27:39.555388 2588 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-tigera-ca-bundle\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.558691 kubelet[2588]: I1213 02:27:39.555398 2588 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d3313764-c25a-40ad-9f30-c39affa2bbac-typha-certs\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.558691 kubelet[2588]: I1213 02:27:39.555408 2588 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-node-certs\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.558691 kubelet[2588]: I1213 02:27:39.555419 2588 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mghdw\" (UniqueName: \"kubernetes.io/projected/d3313764-c25a-40ad-9f30-c39affa2bbac-kube-api-access-mghdw\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.558691 kubelet[2588]: I1213 02:27:39.557160 2588 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-log-dir\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 
02:27:39.558691 kubelet[2588]: I1213 02:27:39.557172 2588 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-cni-net-dir\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.558691 kubelet[2588]: I1213 02:27:39.557202 2588 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-policysync\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.559172 kubelet[2588]: I1213 02:27:39.557213 2588 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-s2g8d\" (UniqueName: \"kubernetes.io/projected/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5-kube-api-access-s2g8d\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:39.566055 containerd[1455]: time="2024-12-13T02:27:39.565298232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:27:39.566055 containerd[1455]: time="2024-12-13T02:27:39.565420230Z" level=info msg="RemovePodSandbox \"32da3c5d522031e8fc8ec35c954c9ba0cbd1bcdaea324117d6751b2a3cd3b500\" returns successfully" Dec 13 02:27:39.566913 containerd[1455]: time="2024-12-13T02:27:39.566677089Z" level=info msg="StopPodSandbox for \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\"" Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.634 [WARNING][5142] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f08b161-a86f-45d0-9c4d-8166dcb1e19a", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5", Pod:"csi-node-driver-t8jkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43010fa0bae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.634 [INFO][5142] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.634 [INFO][5142] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" iface="eth0" netns="" Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.634 [INFO][5142] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.634 [INFO][5142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.663 [INFO][5149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" HandleID="k8s-pod-network.be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.665 [INFO][5149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.665 [INFO][5149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.703 [WARNING][5149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" HandleID="k8s-pod-network.be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.703 [INFO][5149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" HandleID="k8s-pod-network.be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.710 [INFO][5149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:39.718722 containerd[1455]: 2024-12-13 02:27:39.714 [INFO][5142] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:39.720638 containerd[1455]: time="2024-12-13T02:27:39.718733213Z" level=info msg="TearDown network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\" successfully" Dec 13 02:27:39.720638 containerd[1455]: time="2024-12-13T02:27:39.718762047Z" level=info msg="StopPodSandbox for \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\" returns successfully" Dec 13 02:27:39.720638 containerd[1455]: time="2024-12-13T02:27:39.719887369Z" level=info msg="RemovePodSandbox for \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\"" Dec 13 02:27:39.720638 containerd[1455]: time="2024-12-13T02:27:39.719923136Z" level=info msg="Forcibly stopping sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\"" Dec 13 02:27:39.799208 kubelet[2588]: I1213 02:27:39.799155 2588 scope.go:117] "RemoveContainer" containerID="5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f" Dec 13 02:27:39.804234 containerd[1455]: time="2024-12-13T02:27:39.802802125Z" level=info msg="RemoveContainer for \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\"" Dec 13 02:27:39.813702 containerd[1455]: time="2024-12-13T02:27:39.809962954Z" level=info msg="RemoveContainer for \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\" returns successfully" Dec 13 02:27:39.814386 kubelet[2588]: I1213 02:27:39.814324 2588 scope.go:117] "RemoveContainer" containerID="e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea" Dec 13 02:27:39.819522 containerd[1455]: time="2024-12-13T02:27:39.819369006Z" level=info msg="RemoveContainer for \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\"" Dec 13 02:27:39.830658 containerd[1455]: time="2024-12-13T02:27:39.829069150Z" level=info msg="RemoveContainer for \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\" returns successfully" Dec 13 02:27:39.831546 kubelet[2588]: I1213 02:27:39.831353 2588 scope.go:117] "RemoveContainer" containerID="21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a" Dec 13 02:27:39.837441 containerd[1455]: time="2024-12-13T02:27:39.837408881Z" level=info msg="RemoveContainer for \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\"" Dec 13 02:27:39.844804 containerd[1455]: time="2024-12-13T02:27:39.844655992Z" level=info msg="RemoveContainer for \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\" returns successfully" Dec 13 02:27:39.846080 kubelet[2588]: I1213 02:27:39.845251 
2588 scope.go:117] "RemoveContainer" containerID="5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f" Dec 13 02:27:39.862064 containerd[1455]: time="2024-12-13T02:27:39.845843340Z" level=error msg="ContainerStatus for \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\": not found" Dec 13 02:27:39.862187 kubelet[2588]: E1213 02:27:39.862154 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\": not found" containerID="5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f" Dec 13 02:27:39.865379 kubelet[2588]: I1213 02:27:39.865166 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f"} err="failed to get container status \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c54bcae5060190656648de434bd745739df86026a1648e558a901141fd1666f\": not found" Dec 13 02:27:39.865379 kubelet[2588]: I1213 02:27:39.865257 2588 scope.go:117] "RemoveContainer" containerID="e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea" Dec 13 02:27:39.866057 containerd[1455]: time="2024-12-13T02:27:39.865655863Z" level=error msg="ContainerStatus for \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\": not found" Dec 13 02:27:39.866115 kubelet[2588]: E1213 02:27:39.865882 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\": not found" containerID="e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea" Dec 13 02:27:39.866115 kubelet[2588]: I1213 02:27:39.865917 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea"} err="failed to get container status \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"e54e4ea9fb51448315b6ab5a6de9de979ee38b2cbc8c3464a9f9ad63ecc8d0ea\": not found" Dec 13 02:27:39.866115 kubelet[2588]: I1213 02:27:39.865945 2588 scope.go:117] "RemoveContainer" containerID="21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a" Dec 13 02:27:39.866383 containerd[1455]: time="2024-12-13T02:27:39.866339846Z" level=error msg="ContainerStatus for \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\": not found" Dec 13 02:27:39.866750 kubelet[2588]: E1213 02:27:39.866570 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\": 
not found" containerID="21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a" Dec 13 02:27:39.866750 kubelet[2588]: I1213 02:27:39.866632 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a"} err="failed to get container status \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"21d2d2735c0d783d40a319ad1be1cf26fd06ac9a9abf8661d94617be88cfae7a\": not found" Dec 13 02:27:39.866750 kubelet[2588]: I1213 02:27:39.866654 2588 scope.go:117] "RemoveContainer" containerID="7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe" Dec 13 02:27:39.870150 containerd[1455]: time="2024-12-13T02:27:39.868605007Z" level=info msg="RemoveContainer for \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\"" Dec 13 02:27:39.876607 containerd[1455]: time="2024-12-13T02:27:39.875886352Z" level=info msg="RemoveContainer for \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\" returns successfully" Dec 13 02:27:39.877026 kubelet[2588]: I1213 02:27:39.876992 2588 scope.go:117] "RemoveContainer" containerID="7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe" Dec 13 02:27:39.877433 containerd[1455]: time="2024-12-13T02:27:39.877299564Z" level=error msg="ContainerStatus for \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\": not found" Dec 13 02:27:39.877489 kubelet[2588]: E1213 02:27:39.877451 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\": not found" containerID="7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe" Dec 13 02:27:39.877522 kubelet[2588]: I1213 02:27:39.877492 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe"} err="failed to get container status \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d0e7fcfd1f05f6836f9fe8df30986c8d8d3d2d5d54482e1748a7ee0566cc1fe\": not found" Dec 13 02:27:39.946465 containerd[1455]: time="2024-12-13T02:27:39.946398863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb679588f-f4ql7,Uid:f6dbf03e-542f-423c-92ac-07ec29c4f125,Namespace:calico-system,Attempt:0,}" Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.781 [WARNING][5171] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f08b161-a86f-45d0-9c4d-8166dcb1e19a", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5", Pod:"csi-node-driver-t8jkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43010fa0bae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.781 [INFO][5171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.781 [INFO][5171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" iface="eth0" netns="" Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.782 [INFO][5171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.782 [INFO][5171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.853 [INFO][5177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" HandleID="k8s-pod-network.be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.853 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.853 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.924 [WARNING][5177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" HandleID="k8s-pod-network.be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.924 [INFO][5177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" HandleID="k8s-pod-network.be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-csi--node--driver--t8jkg-eth0" Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.965 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:39.979149 containerd[1455]: 2024-12-13 02:27:39.977 [INFO][5171] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed" Dec 13 02:27:39.979149 containerd[1455]: time="2024-12-13T02:27:39.978964719Z" level=info msg="TearDown network for sandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\" successfully" Dec 13 02:27:39.981624 containerd[1455]: time="2024-12-13T02:27:39.979449790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:27:39.981624 containerd[1455]: time="2024-12-13T02:27:39.981311934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:27:39.984342 containerd[1455]: time="2024-12-13T02:27:39.984224770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:39.984565 containerd[1455]: time="2024-12-13T02:27:39.984461625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:39.987227 containerd[1455]: time="2024-12-13T02:27:39.986978880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:27:39.987227 containerd[1455]: time="2024-12-13T02:27:39.987067596Z" level=info msg="RemovePodSandbox \"be1558d3d1b41c02c2b2968159e9fa362200f65bd29c91264dc6f06b102eafed\" returns successfully" Dec 13 02:27:39.987729 containerd[1455]: time="2024-12-13T02:27:39.987680225Z" level=info msg="StopPodSandbox for \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\"" Dec 13 02:27:39.991517 containerd[1455]: time="2024-12-13T02:27:39.991479856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtq2r,Uid:565fe6e4-8448-4596-a333-f4e46db847f9,Namespace:calico-system,Attempt:0,}" Dec 13 02:27:40.041985 systemd[1]: var-lib-kubelet-pods-d7e8fe28\x2d38c6\x2d4e23\x2d979c\x2d3075ce4b6bf5-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Dec 13 02:27:40.042106 systemd[1]: var-lib-kubelet-pods-d3313764\x2dc25a\x2d40ad\x2d9f30\x2dc39affa2bbac-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
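The var-lib-kubelet-pods-….mount unit names in these records (and the ones that follow) are systemd-escaped forms of kubelet volume paths: systemd turns path separators into "-" and hex-escapes other unsafe bytes as \xNN, which is why each literal "-" inside a pod UID appears as \x2d and the "~" in kubernetes.io~secret appears as \x7e. A rough Go sketch of that rule (an approximation of systemd-escape --path, not the systemd source; escapePath is a name chosen here):

    package main

    import "fmt"

    // escapePath approximates `systemd-escape --path`: strip the leading "/",
    // turn remaining "/" separators into "-", and hex-escape every byte that
    // is not [a-zA-Z0-9:_.] as \xNN (so "-" -> \x2d and "~" -> \x7e).
    func escapePath(p string) string {
        if len(p) > 0 && p[0] == '/' {
            p = p[1:]
        }
        out := ""
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                out += "-"
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                out += string(c)
            default:
                out += fmt.Sprintf(`\x%02x`, c)
            }
        }
        return out
    }

    func main() {
        fmt.Println(escapePath("/var/lib/kubelet/pods/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5/volumes/kubernetes.io~secret/node-certs"))
    }

Running this on the node-certs volume path reproduces the unit name logged just below, minus the ".mount" suffix.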
Dec 13 02:27:40.042205 systemd[1]: var-lib-kubelet-pods-d7e8fe28\x2d38c6\x2d4e23\x2d979c\x2d3075ce4b6bf5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds2g8d.mount: Deactivated successfully. Dec 13 02:27:40.042288 systemd[1]: var-lib-kubelet-pods-d7e8fe28\x2d38c6\x2d4e23\x2d979c\x2d3075ce4b6bf5-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Dec 13 02:27:40.042354 systemd[1]: var-lib-kubelet-pods-d3313764\x2dc25a\x2d40ad\x2d9f30\x2dc39affa2bbac-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Dec 13 02:27:40.078333 systemd[1]: Started cri-containerd-fc135a659df21a54010555c5c7af3730eabbf47ae15bca4ee72d681cfa861494.scope - libcontainer container fc135a659df21a54010555c5c7af3730eabbf47ae15bca4ee72d681cfa861494. Dec 13 02:27:40.093366 containerd[1455]: time="2024-12-13T02:27:40.089996860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:27:40.093366 containerd[1455]: time="2024-12-13T02:27:40.092887154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:27:40.093366 containerd[1455]: time="2024-12-13T02:27:40.092924865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:40.093366 containerd[1455]: time="2024-12-13T02:27:40.093037576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:40.138913 systemd[1]: run-containerd-runc-k8s.io-0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b-runc.DQ4jpF.mount: Deactivated successfully. Dec 13 02:27:40.154309 systemd[1]: Started cri-containerd-0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b.scope - libcontainer container 0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b. Dec 13 02:27:40.230676 containerd[1455]: time="2024-12-13T02:27:40.230626505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtq2r,Uid:565fe6e4-8448-4596-a333-f4e46db847f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b\"" Dec 13 02:27:40.236756 containerd[1455]: time="2024-12-13T02:27:40.236690886Z" level=info msg="CreateContainer within sandbox \"0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:27:40.283148 containerd[1455]: time="2024-12-13T02:27:40.282293917Z" level=info msg="CreateContainer within sandbox \"0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e44788e41c809fd107559131353006b32c23ccd7261ba08b7db47556bc82473e\"" Dec 13 02:27:40.284291 containerd[1455]: time="2024-12-13T02:27:40.284038291Z" level=info msg="StartContainer for \"e44788e41c809fd107559131353006b32c23ccd7261ba08b7db47556bc82473e\"" Dec 13 02:27:40.345336 systemd[1]: Started cri-containerd-e44788e41c809fd107559131353006b32c23ccd7261ba08b7db47556bc82473e.scope - libcontainer container e44788e41c809fd107559131353006b32c23ccd7261ba08b7db47556bc82473e. Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.211 [WARNING][5220] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"91c52a03-065a-4299-8a52-a6f37a97ba45", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac", Pod:"coredns-6f6b679f8f-rg7ts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califace4002a81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.212 [INFO][5220] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.213 [INFO][5220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" iface="eth0" netns="" Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.213 [INFO][5220] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.213 [INFO][5220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.321 [INFO][5270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" HandleID="k8s-pod-network.902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.332 [INFO][5270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.332 [INFO][5270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.354 [WARNING][5270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" HandleID="k8s-pod-network.902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.354 [INFO][5270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" HandleID="k8s-pod-network.902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.358 [INFO][5270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:40.368866 containerd[1455]: 2024-12-13 02:27:40.362 [INFO][5220] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:40.368866 containerd[1455]: time="2024-12-13T02:27:40.368728787Z" level=info msg="TearDown network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\" successfully" Dec 13 02:27:40.368866 containerd[1455]: time="2024-12-13T02:27:40.368760706Z" level=info msg="StopPodSandbox for \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\" returns successfully" Dec 13 02:27:40.373152 containerd[1455]: time="2024-12-13T02:27:40.371045555Z" level=info msg="RemovePodSandbox for \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\"" Dec 13 02:27:40.373152 containerd[1455]: time="2024-12-13T02:27:40.371078547Z" level=info msg="Forcibly stopping sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\"" Dec 13 02:27:40.505978 containerd[1455]: time="2024-12-13T02:27:40.505918256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb679588f-f4ql7,Uid:f6dbf03e-542f-423c-92ac-07ec29c4f125,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc135a659df21a54010555c5c7af3730eabbf47ae15bca4ee72d681cfa861494\"" Dec 13 02:27:40.520927 containerd[1455]: time="2024-12-13T02:27:40.520888830Z" level=info msg="CreateContainer within sandbox \"fc135a659df21a54010555c5c7af3730eabbf47ae15bca4ee72d681cfa861494\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.467 [WARNING][5322] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"91c52a03-065a-4299-8a52-a6f37a97ba45", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"f7b939b6a8f7142c64236ab45c15b0a58a7d69442e044e013a3ce029c2d2fdac", Pod:"coredns-6f6b679f8f-rg7ts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califace4002a81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.468 [INFO][5322] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.468 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" iface="eth0" netns="" Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.468 [INFO][5322] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.468 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.541 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" HandleID="k8s-pod-network.902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.541 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.541 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.553 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" HandleID="k8s-pod-network.902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.553 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" HandleID="k8s-pod-network.902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-coredns--6f6b679f8f--rg7ts-eth0" Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.558 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:40.576340 containerd[1455]: 2024-12-13 02:27:40.566 [INFO][5322] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425" Dec 13 02:27:40.578792 containerd[1455]: time="2024-12-13T02:27:40.577475164Z" level=info msg="CreateContainer within sandbox \"fc135a659df21a54010555c5c7af3730eabbf47ae15bca4ee72d681cfa861494\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"58c9a72e91b577f7a29b71f2cdb5e6f579b283ce17416b3406c63eb5e5c94e5f\"" Dec 13 02:27:40.578792 containerd[1455]: time="2024-12-13T02:27:40.578550752Z" level=info msg="TearDown network for sandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\" successfully" Dec 13 02:27:40.581535 containerd[1455]: time="2024-12-13T02:27:40.581365044Z" level=info msg="StartContainer for \"58c9a72e91b577f7a29b71f2cdb5e6f579b283ce17416b3406c63eb5e5c94e5f\"" Dec 13 02:27:40.598277 containerd[1455]: time="2024-12-13T02:27:40.596799159Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:27:40.605571 containerd[1455]: time="2024-12-13T02:27:40.605489067Z" level=info msg="RemovePodSandbox \"902372fe084eeccbc5eabcb140698fb1d6bb2e8bdc4d3344b928318d1583e425\" returns successfully" Dec 13 02:27:40.605691 containerd[1455]: time="2024-12-13T02:27:40.605649538Z" level=info msg="StartContainer for \"e44788e41c809fd107559131353006b32c23ccd7261ba08b7db47556bc82473e\" returns successfully" Dec 13 02:27:40.655346 systemd[1]: Started cri-containerd-58c9a72e91b577f7a29b71f2cdb5e6f579b283ce17416b3406c63eb5e5c94e5f.scope - libcontainer container 58c9a72e91b577f7a29b71f2cdb5e6f579b283ce17416b3406c63eb5e5c94e5f. Dec 13 02:27:40.733462 systemd[1]: Started sshd@9-172.24.4.31:22-172.24.4.1:57864.service - OpenSSH per-connection server daemon (172.24.4.1:57864). Dec 13 02:27:40.915962 systemd[1]: cri-containerd-e44788e41c809fd107559131353006b32c23ccd7261ba08b7db47556bc82473e.scope: Deactivated successfully. 
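The release sequence repeated across these teardowns — acquire the host-wide IPAM lock, try to release by handleID, fall back to the workloadID, and treat "address doesn't exist" as success — is what makes the repeated StopPodSandbox / "Forcibly stopping sandbox" calls safe to replay. A condensed sketch of that flow against an in-memory stand-in (all names invented, not the Calico IPAM plugin itself):

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    var errNotFound = errors.New("address not found")

    // fakeIPAM stands in for the real datastore.
    type fakeIPAM struct {
        mu      sync.Mutex // plays the "host-wide IPAM lock" from the logs
        handles map[string]string
    }

    func (f *fakeIPAM) releaseByHandle(h string) error {
        if _, ok := f.handles[h]; !ok {
            return errNotFound
        }
        delete(f.handles, h)
        return nil
    }

    // releaseAddress mirrors the ipam_plugin.go record sequence: lock, try
    // the handleID, fall back to the workloadID, and treat an absent
    // address as already released.
    func (f *fakeIPAM) releaseAddress(handleID, workloadID string) error {
        f.mu.Lock() // "About to acquire host-wide IPAM lock."
        defer f.mu.Unlock()

        if err := f.releaseByHandle(handleID); err != nil {
            if !errors.Is(err, errNotFound) {
                return err
            }
            // "Asked to release address but it doesn't exist. Ignoring" --
            // fall back to the older workload-keyed handle.
            if err := f.releaseByHandle(workloadID); err != nil && !errors.Is(err, errNotFound) {
                return err
            }
        }
        return nil
    }

    func main() {
        ipam := &fakeIPAM{handles: map[string]string{}}
        fmt.Println(ipam.releaseAddress("k8s-pod-network.902372fe", "coredns-rg7ts-eth0")) // <nil>
    }

Idempotence here is the point: a second DEL for a sandbox whose address is already gone logs a warning and still returns success.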
Dec 13 02:27:40.948056 containerd[1455]: time="2024-12-13T02:27:40.947396554Z" level=info msg="StartContainer for \"58c9a72e91b577f7a29b71f2cdb5e6f579b283ce17416b3406c63eb5e5c94e5f\" returns successfully" Dec 13 02:27:41.161193 containerd[1455]: time="2024-12-13T02:27:41.160753969Z" level=info msg="shim disconnected" id=e44788e41c809fd107559131353006b32c23ccd7261ba08b7db47556bc82473e namespace=k8s.io Dec 13 02:27:41.161193 containerd[1455]: time="2024-12-13T02:27:41.160882510Z" level=warning msg="cleaning up after shim disconnected" id=e44788e41c809fd107559131353006b32c23ccd7261ba08b7db47556bc82473e namespace=k8s.io Dec 13 02:27:41.161193 containerd[1455]: time="2024-12-13T02:27:41.160902297Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:41.449149 kubelet[2588]: I1213 02:27:41.448532 2588 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3313764-c25a-40ad-9f30-c39affa2bbac" path="/var/lib/kubelet/pods/d3313764-c25a-40ad-9f30-c39affa2bbac/volumes" Dec 13 02:27:41.451609 kubelet[2588]: I1213 02:27:41.451392 2588 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8fe28-38c6-4e23-979c-3075ce4b6bf5" path="/var/lib/kubelet/pods/d7e8fe28-38c6-4e23-979c-3075ce4b6bf5/volumes" Dec 13 02:27:41.893534 containerd[1455]: time="2024-12-13T02:27:41.893470549Z" level=info msg="CreateContainer within sandbox \"0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 02:27:41.911432 sshd[5377]: Accepted publickey for core from 172.24.4.1 port 57864 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:27:41.917009 sshd[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:27:41.938080 systemd-logind[1431]: New session 12 of user core. Dec 13 02:27:41.943149 containerd[1455]: time="2024-12-13T02:27:41.942956246Z" level=info msg="CreateContainer within sandbox \"0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885\"" Dec 13 02:27:41.946509 containerd[1455]: time="2024-12-13T02:27:41.946253273Z" level=info msg="StartContainer for \"65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885\"" Dec 13 02:27:41.949485 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 02:27:41.966456 kubelet[2588]: I1213 02:27:41.966110 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fb679588f-f4ql7" podStartSLOduration=6.966086874 podStartE2EDuration="6.966086874s" podCreationTimestamp="2024-12-13 02:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:27:41.965893651 +0000 UTC m=+66.708723981" watchObservedRunningTime="2024-12-13 02:27:41.966086874 +0000 UTC m=+66.708917203" Dec 13 02:27:42.037712 systemd[1]: Started cri-containerd-65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885.scope - libcontainer container 65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885. 
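The "shim disconnected" / "cleaning up after shim disconnected" lines for e44788… and the earlier cri-containerd-….scope deactivation are the normal exit path for flexvol-driver, a short-lived init container in the calico-node pod that installs its driver and exits; the next init step, install-cni, starts right afterwards. These are cleanup records, not crashes. The same exit can be observed directly with the containerd Go client; a sketch, assuming the default socket path and the "k8s.io" namespace that CRI containers live in on this host:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        c, err := client.LoadContainer(ctx, "e44788e41c809fd107559131353006b32c23ccd7261ba08b7db47556bc82473e")
        if err != nil {
            log.Fatal(err)
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        statusC, err := task.Wait(ctx) // resolves when the shim reports exit
        if err != nil {
            log.Fatal(err)
        }
        st := <-statusC
        code, _, _ := st.Result()
        fmt.Println("exit code:", code)
    }

Also visible above: the typha pod's podStartSLOduration has zero-valued pull timestamps ("0001-01-01 00:00:00"), meaning its image was already on disk and no pull contributed to the 6.97 s startup.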
Dec 13 02:27:42.126161 containerd[1455]: time="2024-12-13T02:27:42.126069870Z" level=info msg="StartContainer for \"65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885\" returns successfully" Dec 13 02:27:43.247952 containerd[1455]: time="2024-12-13T02:27:43.247893350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:43.252531 containerd[1455]: time="2024-12-13T02:27:43.252160247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 02:27:43.253912 containerd[1455]: time="2024-12-13T02:27:43.253876128Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:43.262511 containerd[1455]: time="2024-12-13T02:27:43.262399021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:43.265677 containerd[1455]: time="2024-12-13T02:27:43.265540206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 6.322918028s" Dec 13 02:27:43.265677 containerd[1455]: time="2024-12-13T02:27:43.265588065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 02:27:43.269736 containerd[1455]: time="2024-12-13T02:27:43.269204532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:27:43.336173 containerd[1455]: time="2024-12-13T02:27:43.331295390Z" level=info msg="CreateContainer within sandbox \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 02:27:43.374583 containerd[1455]: time="2024-12-13T02:27:43.374523772Z" level=info msg="CreateContainer within sandbox \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\"" Dec 13 02:27:43.375911 containerd[1455]: time="2024-12-13T02:27:43.375874136Z" level=info msg="StartContainer for \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\"" Dec 13 02:27:43.438375 systemd[1]: Started cri-containerd-e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490.scope - libcontainer container e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490. Dec 13 02:27:43.579383 containerd[1455]: time="2024-12-13T02:27:43.578659798Z" level=info msg="StartContainer for \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\" returns successfully" Dec 13 02:27:43.704884 sshd[5377]: pam_unix(sshd:session): session closed for user core Dec 13 02:27:43.714991 systemd[1]: sshd@9-172.24.4.31:22-172.24.4.1:57864.service: Deactivated successfully. 
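The kube-controllers pull that completes above was a cold pull: roughly 34 MB read from the registry in 6.32 s. Back-of-envelope check of the throughput, using the figures from the containerd records:

    package main

    import "fmt"

    func main() {
        // figures from the "stop pulling image" / "Pulled image" logs above
        bytesRead := 34141192.0 // bytes actually fetched from the registry
        seconds := 6.322918028  // duration reported for the pull
        fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~5.1 MiB/s
    }

Contrast this with the apiserver pull just below, which finishes in 471 ms having read only 77 bytes: its layers were already local, so only the manifest had to be fetched.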
Dec 13 02:27:43.719061 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:27:43.720560 systemd-logind[1431]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:27:43.723880 systemd-logind[1431]: Removed session 12. Dec 13 02:27:43.734434 containerd[1455]: time="2024-12-13T02:27:43.732599147Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:43.734723 containerd[1455]: time="2024-12-13T02:27:43.734683668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 02:27:43.740479 containerd[1455]: time="2024-12-13T02:27:43.740416256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 471.165126ms" Dec 13 02:27:43.740479 containerd[1455]: time="2024-12-13T02:27:43.740465398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 02:27:43.742862 containerd[1455]: time="2024-12-13T02:27:43.742829174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 02:27:43.748325 containerd[1455]: time="2024-12-13T02:27:43.747773161Z" level=info msg="CreateContainer within sandbox \"31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:27:43.788183 containerd[1455]: time="2024-12-13T02:27:43.788104837Z" level=info msg="CreateContainer within sandbox \"31adb05c95ea892d767633d227ca764635e1eea43b5b93478c90f53a7896e82f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32f9fcebb58a35417235b9750b1523882f4fc0c610c1439d3136d6d26d7fe03f\"" Dec 13 02:27:43.789317 containerd[1455]: time="2024-12-13T02:27:43.789281515Z" level=info msg="StartContainer for \"32f9fcebb58a35417235b9750b1523882f4fc0c610c1439d3136d6d26d7fe03f\"" Dec 13 02:27:43.846431 systemd[1]: Started cri-containerd-32f9fcebb58a35417235b9750b1523882f4fc0c610c1439d3136d6d26d7fe03f.scope - libcontainer container 32f9fcebb58a35417235b9750b1523882f4fc0c610c1439d3136d6d26d7fe03f. Dec 13 02:27:43.914760 containerd[1455]: time="2024-12-13T02:27:43.914358461Z" level=info msg="StopContainer for \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\" with timeout 30 (s)" Dec 13 02:27:43.917554 containerd[1455]: time="2024-12-13T02:27:43.916343807Z" level=info msg="Stop container \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\" with signal terminated" Dec 13 02:27:43.949697 systemd[1]: cri-containerd-e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490.scope: Deactivated successfully. 
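"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is CRI's graceful-stop contract: deliver SIGTERM, give the container the grace period, and only then escalate to SIGKILL. The ExecSync failures that follow are a readiness probe (/usr/bin/check-status -r) racing that shutdown, so "cannot exec in a stopped container" is the expected outcome rather than a new fault. The TERM-then-KILL shape in miniature, against an ordinary child process rather than the runc implementation:

    package main

    import (
        "log"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "300") // stand-in for a container process
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        // Graceful stop: SIGTERM first, SIGKILL only after the grace period.
        _ = cmd.Process.Signal(syscall.SIGTERM)
        select {
        case <-done:
            log.Println("exited within grace period")
        case <-time.After(30 * time.Second):
            _ = cmd.Process.Kill() // escalate after the timeout
            <-done
            log.Println("killed after grace period")
        }
    }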
Dec 13 02:27:43.991160 containerd[1455]: time="2024-12-13T02:27:43.990648164Z" level=error msg="ExecSync for \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\" failed" error="failed to exec in container: failed to start exec \"2bd7277c8e2f04c113a20641fd0aae4b95eb8c4f11472657caf570d82459c6b6\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" Dec 13 02:27:43.991319 kubelet[2588]: E1213 02:27:43.991141 2588 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"2bd7277c8e2f04c113a20641fd0aae4b95eb8c4f11472657caf570d82459c6b6\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490" cmd=["/usr/bin/check-status","-r"] Dec 13 02:27:44.014892 containerd[1455]: time="2024-12-13T02:27:44.014697124Z" level=info msg="StartContainer for \"32f9fcebb58a35417235b9750b1523882f4fc0c610c1439d3136d6d26d7fe03f\" returns successfully" Dec 13 02:27:44.305299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490-rootfs.mount: Deactivated successfully. Dec 13 02:27:44.737919 containerd[1455]: time="2024-12-13T02:27:44.737646400Z" level=info msg="shim disconnected" id=e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490 namespace=k8s.io Dec 13 02:27:44.737919 containerd[1455]: time="2024-12-13T02:27:44.737712675Z" level=warning msg="cleaning up after shim disconnected" id=e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490 namespace=k8s.io Dec 13 02:27:44.737919 containerd[1455]: time="2024-12-13T02:27:44.737723555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:44.742235 containerd[1455]: time="2024-12-13T02:27:44.741752245Z" level=error msg="ExecSync for \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"99202dd8d3df44df9b171d321b4f29300b208b374dafac117be907f384c681f0\": container not created: not found" Dec 13 02:27:44.742993 kubelet[2588]: E1213 02:27:44.742751 2588 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"99202dd8d3df44df9b171d321b4f29300b208b374dafac117be907f384c681f0\": container not created: not found" containerID="e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490" cmd=["/usr/bin/check-status","-r"] Dec 13 02:27:44.747478 containerd[1455]: time="2024-12-13T02:27:44.747377301Z" level=error msg="ExecSync for \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490 not found: not found" Dec 13 02:27:44.766560 containerd[1455]: time="2024-12-13T02:27:44.763017279Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:27:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 02:27:44.766852 kubelet[2588]: E1213 02:27:44.747896 2588 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running 
task found: task e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490 not found: not found" containerID="e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490" cmd=["/usr/bin/check-status","-r"] Dec 13 02:27:44.784942 containerd[1455]: time="2024-12-13T02:27:44.784893111Z" level=info msg="StopContainer for \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\" returns successfully" Dec 13 02:27:44.786356 containerd[1455]: time="2024-12-13T02:27:44.786315260Z" level=info msg="StopPodSandbox for \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\"" Dec 13 02:27:44.786411 containerd[1455]: time="2024-12-13T02:27:44.786366726Z" level=info msg="Container to stop \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:27:44.789724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e-shm.mount: Deactivated successfully. Dec 13 02:27:44.799169 systemd[1]: cri-containerd-f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e.scope: Deactivated successfully. Dec 13 02:27:44.836647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e-rootfs.mount: Deactivated successfully. Dec 13 02:27:44.843844 containerd[1455]: time="2024-12-13T02:27:44.843787649Z" level=info msg="shim disconnected" id=f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e namespace=k8s.io Dec 13 02:27:44.843844 containerd[1455]: time="2024-12-13T02:27:44.843842312Z" level=warning msg="cleaning up after shim disconnected" id=f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e namespace=k8s.io Dec 13 02:27:44.843956 containerd[1455]: time="2024-12-13T02:27:44.843853322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:44.995743 kubelet[2588]: I1213 02:27:44.995608 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7dd4dd5bd7-zwr86" podStartSLOduration=38.909964372 podStartE2EDuration="50.995583872s" podCreationTimestamp="2024-12-13 02:26:54 +0000 UTC" firstStartedPulling="2024-12-13 02:27:31.182530793 +0000 UTC m=+55.925361123" lastFinishedPulling="2024-12-13 02:27:43.268150284 +0000 UTC m=+68.010980623" observedRunningTime="2024-12-13 02:27:43.943656922 +0000 UTC m=+68.686487261" watchObservedRunningTime="2024-12-13 02:27:44.995583872 +0000 UTC m=+69.738414201" Dec 13 02:27:45.262080 kubelet[2588]: I1213 02:27:45.252828 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57b88875bb-sw527" podStartSLOduration=37.874707077 podStartE2EDuration="50.252807736s" podCreationTimestamp="2024-12-13 02:26:55 +0000 UTC" firstStartedPulling="2024-12-13 02:27:31.363989458 +0000 UTC m=+56.106819797" lastFinishedPulling="2024-12-13 02:27:43.742090117 +0000 UTC m=+68.484920456" observedRunningTime="2024-12-13 02:27:44.996773534 +0000 UTC m=+69.739603873" watchObservedRunningTime="2024-12-13 02:27:45.252807736 +0000 UTC m=+69.995638075" Dec 13 02:27:45.257948 systemd-networkd[1365]: cali8498c7c6f82: Link DOWN Dec 13 02:27:45.257953 systemd-networkd[1365]: cali8498c7c6f82: Lost carrier Dec 13 02:27:45.309228 kubelet[2588]: I1213 02:27:45.308103 2588 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 
02:27:45.255 [INFO][5639] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.256 [INFO][5639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" iface="eth0" netns="/var/run/netns/cni-975f0362-df74-d919-33f1-6070a0f225c1" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.256 [INFO][5639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" iface="eth0" netns="/var/run/netns/cni-975f0362-df74-d919-33f1-6070a0f225c1" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.266 [INFO][5639] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" after=10.175094ms iface="eth0" netns="/var/run/netns/cni-975f0362-df74-d919-33f1-6070a0f225c1" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.266 [INFO][5639] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.266 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.302 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.302 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.302 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.438 [INFO][5648] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.438 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.443 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:45.457109 containerd[1455]: 2024-12-13 02:27:45.448 [INFO][5639] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:27:45.459433 containerd[1455]: time="2024-12-13T02:27:45.459331762Z" level=info msg="TearDown network for sandbox \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\" successfully" Dec 13 02:27:45.459433 containerd[1455]: time="2024-12-13T02:27:45.459379922Z" level=info msg="StopPodSandbox for \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\" returns successfully" Dec 13 02:27:45.461583 systemd[1]: run-netns-cni\x2d975f0362\x2ddf74\x2dd919\x2d33f1\x2d6070a0f225c1.mount: Deactivated successfully. Dec 13 02:27:45.589114 kubelet[2588]: I1213 02:27:45.589059 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1e87c6a-e32e-4fa8-9314-e0438c9aec4d-tigera-ca-bundle\") pod \"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d\" (UID: \"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d\") " Dec 13 02:27:45.589114 kubelet[2588]: I1213 02:27:45.589165 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqwf2\" (UniqueName: \"kubernetes.io/projected/d1e87c6a-e32e-4fa8-9314-e0438c9aec4d-kube-api-access-cqwf2\") pod \"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d\" (UID: \"d1e87c6a-e32e-4fa8-9314-e0438c9aec4d\") " Dec 13 02:27:45.598794 systemd[1]: var-lib-kubelet-pods-d1e87c6a\x2de32e\x2d4fa8\x2d9314\x2de0438c9aec4d-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Dec 13 02:27:45.620333 systemd[1]: var-lib-kubelet-pods-d1e87c6a\x2de32e\x2d4fa8\x2d9314\x2de0438c9aec4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqwf2.mount: Deactivated successfully. Dec 13 02:27:45.627187 kubelet[2588]: I1213 02:27:45.627075 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e87c6a-e32e-4fa8-9314-e0438c9aec4d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d1e87c6a-e32e-4fa8-9314-e0438c9aec4d" (UID: "d1e87c6a-e32e-4fa8-9314-e0438c9aec4d"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:27:45.634288 kubelet[2588]: I1213 02:27:45.634221 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e87c6a-e32e-4fa8-9314-e0438c9aec4d-kube-api-access-cqwf2" (OuterVolumeSpecName: "kube-api-access-cqwf2") pod "d1e87c6a-e32e-4fa8-9314-e0438c9aec4d" (UID: "d1e87c6a-e32e-4fa8-9314-e0438c9aec4d"). InnerVolumeSpecName "kube-api-access-cqwf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:27:45.690271 kubelet[2588]: I1213 02:27:45.690217 2588 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1e87c6a-e32e-4fa8-9314-e0438c9aec4d-tigera-ca-bundle\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:45.690271 kubelet[2588]: I1213 02:27:45.690255 2588 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cqwf2\" (UniqueName: \"kubernetes.io/projected/d1e87c6a-e32e-4fa8-9314-e0438c9aec4d-kube-api-access-cqwf2\") on node \"ci-4081-2-1-b-462e46fdf9.novalocal\" DevicePath \"\"" Dec 13 02:27:45.954016 kubelet[2588]: I1213 02:27:45.953847 2588 scope.go:117] "RemoveContainer" containerID="e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490" Dec 13 02:27:45.957254 containerd[1455]: time="2024-12-13T02:27:45.957167840Z" level=info msg="RemoveContainer for \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\"" Dec 13 02:27:45.991536 systemd[1]: Removed slice kubepods-besteffort-podd1e87c6a_e32e_4fa8_9314_e0438c9aec4d.slice - libcontainer container kubepods-besteffort-podd1e87c6a_e32e_4fa8_9314_e0438c9aec4d.slice. Dec 13 02:27:46.021463 containerd[1455]: time="2024-12-13T02:27:46.021342914Z" level=info msg="RemoveContainer for \"e8097781f2093ba27e58c5fa42921b3425531cf07016bf0ef004ee541fc7c490\" returns successfully" Dec 13 02:27:46.959079 kubelet[2588]: I1213 02:27:46.959039 2588 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:27:47.452146 kubelet[2588]: I1213 02:27:47.451978 2588 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e87c6a-e32e-4fa8-9314-e0438c9aec4d" path="/var/lib/kubelet/pods/d1e87c6a-e32e-4fa8-9314-e0438c9aec4d/volumes" Dec 13 02:27:47.580052 containerd[1455]: time="2024-12-13T02:27:47.579278239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:47.588192 containerd[1455]: time="2024-12-13T02:27:47.586409571Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:47.590874 containerd[1455]: time="2024-12-13T02:27:47.590834464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 02:27:47.591808 containerd[1455]: time="2024-12-13T02:27:47.591782292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:27:47.593790 containerd[1455]: time="2024-12-13T02:27:47.593724306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.85085673s" Dec 13 02:27:47.593853 containerd[1455]: time="2024-12-13T02:27:47.593787595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 02:27:47.598043 containerd[1455]: time="2024-12-13T02:27:47.597936490Z" level=info msg="CreateContainer within sandbox \"f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 02:27:47.651095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069923493.mount: Deactivated successfully. Dec 13 02:27:47.670921 containerd[1455]: time="2024-12-13T02:27:47.670706861Z" level=info msg="CreateContainer within sandbox \"f15b104b2841cb65c80716de9dcabeb55f01c7bbd72f94c6bf88f248954df2e5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"14c61ba06ed56ad91fa24bfee1e2400fb725749f5f939bd1e15372777709db7b\"" Dec 13 02:27:47.671840 containerd[1455]: time="2024-12-13T02:27:47.671576162Z" level=info msg="StartContainer for \"14c61ba06ed56ad91fa24bfee1e2400fb725749f5f939bd1e15372777709db7b\"" Dec 13 02:27:47.732282 systemd[1]: Started cri-containerd-14c61ba06ed56ad91fa24bfee1e2400fb725749f5f939bd1e15372777709db7b.scope - libcontainer container 14c61ba06ed56ad91fa24bfee1e2400fb725749f5f939bd1e15372777709db7b. Dec 13 02:27:47.793025 containerd[1455]: time="2024-12-13T02:27:47.792700017Z" level=info msg="StartContainer for \"14c61ba06ed56ad91fa24bfee1e2400fb725749f5f939bd1e15372777709db7b\" returns successfully" Dec 13 02:27:48.021596 kubelet[2588]: I1213 02:27:48.021244 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t8jkg" podStartSLOduration=36.89065828 podStartE2EDuration="55.021202703s" podCreationTimestamp="2024-12-13 02:26:53 +0000 UTC" firstStartedPulling="2024-12-13 02:27:29.465278241 +0000 UTC m=+54.208108570" lastFinishedPulling="2024-12-13 02:27:47.595822664 +0000 UTC m=+72.338652993" observedRunningTime="2024-12-13 02:27:48.019448742 +0000 UTC m=+72.762279111" watchObservedRunningTime="2024-12-13 02:27:48.021202703 +0000 UTC m=+72.764033082" Dec 13 02:27:48.720787 systemd[1]: Started sshd@10-172.24.4.31:22-172.24.4.1:53726.service - OpenSSH per-connection server daemon (172.24.4.1:53726). Dec 13 02:27:49.543151 systemd[1]: cri-containerd-65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885.scope: Deactivated successfully. Dec 13 02:27:49.543445 systemd[1]: cri-containerd-65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885.scope: Consumed 1.297s CPU time. Dec 13 02:27:49.629206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885-rootfs.mount: Deactivated successfully. 
Dec 13 02:27:49.638978 containerd[1455]: time="2024-12-13T02:27:49.638339132Z" level=info msg="shim disconnected" id=65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885 namespace=k8s.io Dec 13 02:27:49.638978 containerd[1455]: time="2024-12-13T02:27:49.638395017Z" level=warning msg="cleaning up after shim disconnected" id=65bd9d2dc54a9207b0260add0245fa58b6046da900de553a0771f76a1040b885 namespace=k8s.io Dec 13 02:27:49.638978 containerd[1455]: time="2024-12-13T02:27:49.638405868Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:27:49.649771 kubelet[2588]: I1213 02:27:49.649489 2588 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 02:27:49.656908 kubelet[2588]: I1213 02:27:49.656623 2588 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 02:27:50.235282 containerd[1455]: time="2024-12-13T02:27:50.235092422Z" level=info msg="CreateContainer within sandbox \"0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:27:50.309815 containerd[1455]: time="2024-12-13T02:27:50.309559671Z" level=info msg="CreateContainer within sandbox \"0fe64a57cc839a8e04308f257894f2a78f4ee91b6f32ef0fae48f0c07ca5382b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"56fdc2b7aa9a59438d04af70cd3acf4f97274076b3d5adec509bd84a02beb2c3\"" Dec 13 02:27:50.311039 containerd[1455]: time="2024-12-13T02:27:50.310216654Z" level=info msg="StartContainer for \"56fdc2b7aa9a59438d04af70cd3acf4f97274076b3d5adec509bd84a02beb2c3\"" Dec 13 02:27:50.323217 sshd[5711]: Accepted publickey for core from 172.24.4.1 port 53726 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:27:50.326569 sshd[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:27:50.337730 systemd-logind[1431]: New session 13 of user core. Dec 13 02:27:50.345645 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 02:27:50.364357 systemd[1]: Started cri-containerd-56fdc2b7aa9a59438d04af70cd3acf4f97274076b3d5adec509bd84a02beb2c3.scope - libcontainer container 56fdc2b7aa9a59438d04af70cd3acf4f97274076b3d5adec509bd84a02beb2c3. Dec 13 02:27:50.424174 containerd[1455]: time="2024-12-13T02:27:50.424072983Z" level=info msg="StartContainer for \"56fdc2b7aa9a59438d04af70cd3acf4f97274076b3d5adec509bd84a02beb2c3\" returns successfully" Dec 13 02:27:50.650438 kubelet[2588]: E1213 02:27:50.650283 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1e87c6a-e32e-4fa8-9314-e0438c9aec4d" containerName="calico-kube-controllers" Dec 13 02:27:50.651092 kubelet[2588]: I1213 02:27:50.650522 2588 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1e87c6a-e32e-4fa8-9314-e0438c9aec4d" containerName="calico-kube-controllers" Dec 13 02:27:50.660550 systemd[1]: Created slice kubepods-besteffort-pod93da1f82_7ab5_455f_84bd_885dcc138d7e.slice - libcontainer container kubepods-besteffort-pod93da1f82_7ab5_455f_84bd_885dcc138d7e.slice. 
Dec 13 02:27:50.769380 kubelet[2588]: I1213 02:27:50.769324 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93da1f82-7ab5-455f-84bd-885dcc138d7e-tigera-ca-bundle\") pod \"calico-kube-controllers-5d57fd998c-47dp2\" (UID: \"93da1f82-7ab5-455f-84bd-885dcc138d7e\") " pod="calico-system/calico-kube-controllers-5d57fd998c-47dp2" Dec 13 02:27:50.769750 kubelet[2588]: I1213 02:27:50.769690 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6949v\" (UniqueName: \"kubernetes.io/projected/93da1f82-7ab5-455f-84bd-885dcc138d7e-kube-api-access-6949v\") pod \"calico-kube-controllers-5d57fd998c-47dp2\" (UID: \"93da1f82-7ab5-455f-84bd-885dcc138d7e\") " pod="calico-system/calico-kube-controllers-5d57fd998c-47dp2" Dec 13 02:27:50.966275 containerd[1455]: time="2024-12-13T02:27:50.965746973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d57fd998c-47dp2,Uid:93da1f82-7ab5-455f-84bd-885dcc138d7e,Namespace:calico-system,Attempt:0,}" Dec 13 02:27:51.597052 systemd-networkd[1365]: calif637fa821fc: Link UP Dec 13 02:27:51.598296 systemd-networkd[1365]: calif637fa821fc: Gained carrier Dec 13 02:27:51.636212 kubelet[2588]: I1213 02:27:51.635855 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mtq2r" podStartSLOduration=12.635832049 podStartE2EDuration="12.635832049s" podCreationTimestamp="2024-12-13 02:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:27:51.282012934 +0000 UTC m=+76.024843284" watchObservedRunningTime="2024-12-13 02:27:51.635832049 +0000 UTC m=+76.378662378" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.375 [INFO][5819] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0 calico-kube-controllers-5d57fd998c- calico-system 93da1f82-7ab5-455f-84bd-885dcc138d7e 1124 0 2024-12-13 02:27:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d57fd998c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-b-462e46fdf9.novalocal calico-kube-controllers-5d57fd998c-47dp2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif637fa821fc [] []}} ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Namespace="calico-system" Pod="calico-kube-controllers-5d57fd998c-47dp2" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.380 [INFO][5819] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Namespace="calico-system" Pod="calico-kube-controllers-5d57fd998c-47dp2" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.472 [INFO][5869] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" HandleID="k8s-pod-network.1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.504 [INFO][5869] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" HandleID="k8s-pod-network.1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a2130), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-b-462e46fdf9.novalocal", "pod":"calico-kube-controllers-5d57fd998c-47dp2", "timestamp":"2024-12-13 02:27:51.472830949 +0000 UTC"}, Hostname:"ci-4081-2-1-b-462e46fdf9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.504 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.504 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.504 [INFO][5869] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-462e46fdf9.novalocal' Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.507 [INFO][5869] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.516 [INFO][5869] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.542 [INFO][5869] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.549 [INFO][5869] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.554 [INFO][5869] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.554 [INFO][5869] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.557 [INFO][5869] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.568 [INFO][5869] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.582 [INFO][5869] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.89.199/26] block=192.168.89.192/26 handle="k8s-pod-network.1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.583 [INFO][5869] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.199/26] handle="k8s-pod-network.1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" host="ci-4081-2-1-b-462e46fdf9.novalocal" Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.583 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:27:51.646512 containerd[1455]: 2024-12-13 02:27:51.583 [INFO][5869] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.199/26] IPv6=[] ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" HandleID="k8s-pod-network.1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" Dec 13 02:27:51.651065 containerd[1455]: 2024-12-13 02:27:51.590 [INFO][5819] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Namespace="calico-system" Pod="calico-kube-controllers-5d57fd998c-47dp2" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0", GenerateName:"calico-kube-controllers-5d57fd998c-", Namespace:"calico-system", SelfLink:"", UID:"93da1f82-7ab5-455f-84bd-885dcc138d7e", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d57fd998c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"", Pod:"calico-kube-controllers-5d57fd998c-47dp2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif637fa821fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:51.651065 containerd[1455]: 2024-12-13 02:27:51.590 [INFO][5819] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.199/32] ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Namespace="calico-system" Pod="calico-kube-controllers-5d57fd998c-47dp2" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" Dec 13 02:27:51.651065 containerd[1455]: 2024-12-13 02:27:51.590 [INFO][5819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
calif637fa821fc ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Namespace="calico-system" Pod="calico-kube-controllers-5d57fd998c-47dp2" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" Dec 13 02:27:51.651065 containerd[1455]: 2024-12-13 02:27:51.593 [INFO][5819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Namespace="calico-system" Pod="calico-kube-controllers-5d57fd998c-47dp2" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" Dec 13 02:27:51.651065 containerd[1455]: 2024-12-13 02:27:51.603 [INFO][5819] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Namespace="calico-system" Pod="calico-kube-controllers-5d57fd998c-47dp2" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0", GenerateName:"calico-kube-controllers-5d57fd998c-", Namespace:"calico-system", SelfLink:"", UID:"93da1f82-7ab5-455f-84bd-885dcc138d7e", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d57fd998c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-462e46fdf9.novalocal", ContainerID:"1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa", Pod:"calico-kube-controllers-5d57fd998c-47dp2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif637fa821fc", MAC:"8a:55:b3:3b:60:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:27:51.651065 containerd[1455]: 2024-12-13 02:27:51.637 [INFO][5819] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa" Namespace="calico-system" Pod="calico-kube-controllers-5d57fd998c-47dp2" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--5d57fd998c--47dp2-eth0" Dec 13 02:27:51.748593 containerd[1455]: time="2024-12-13T02:27:51.746154284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:27:51.748593 containerd[1455]: time="2024-12-13T02:27:51.746221219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:27:51.748593 containerd[1455]: time="2024-12-13T02:27:51.746234384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:51.748593 containerd[1455]: time="2024-12-13T02:27:51.746324773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:27:51.799416 systemd[1]: Started cri-containerd-1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa.scope - libcontainer container 1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa. Dec 13 02:27:51.961706 containerd[1455]: time="2024-12-13T02:27:51.961660843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d57fd998c-47dp2,Uid:93da1f82-7ab5-455f-84bd-885dcc138d7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa\"" Dec 13 02:27:51.995433 containerd[1455]: time="2024-12-13T02:27:51.994798872Z" level=info msg="CreateContainer within sandbox \"1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 02:27:52.013980 sshd[5711]: pam_unix(sshd:session): session closed for user core Dec 13 02:27:52.032974 systemd[1]: sshd@10-172.24.4.31:22-172.24.4.1:53726.service: Deactivated successfully. Dec 13 02:27:52.039643 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:27:52.044366 systemd-logind[1431]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:27:52.046846 systemd-logind[1431]: Removed session 13. Dec 13 02:27:52.050048 containerd[1455]: time="2024-12-13T02:27:52.049809150Z" level=info msg="CreateContainer within sandbox \"1aa217c8c51c560833529fd9a86cbf34665823c9b8ed880bac12f7c9c4376dfa\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"97f213146be22ada7c2703c2709cf80329ca49e862e1113e826bd9b88f00c3c3\"" Dec 13 02:27:52.053163 containerd[1455]: time="2024-12-13T02:27:52.052344537Z" level=info msg="StartContainer for \"97f213146be22ada7c2703c2709cf80329ca49e862e1113e826bd9b88f00c3c3\"" Dec 13 02:27:52.124871 systemd[1]: Started cri-containerd-97f213146be22ada7c2703c2709cf80329ca49e862e1113e826bd9b88f00c3c3.scope - libcontainer container 97f213146be22ada7c2703c2709cf80329ca49e862e1113e826bd9b88f00c3c3. Dec 13 02:27:52.329902 containerd[1455]: time="2024-12-13T02:27:52.329667080Z" level=info msg="StartContainer for \"97f213146be22ada7c2703c2709cf80329ca49e862e1113e826bd9b88f00c3c3\" returns successfully" Dec 13 02:27:52.647962 systemd-networkd[1365]: calif637fa821fc: Gained IPv6LL Dec 13 02:27:54.345334 kubelet[2588]: I1213 02:27:54.344765 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d57fd998c-47dp2" podStartSLOduration=8.344746042 podStartE2EDuration="8.344746042s" podCreationTimestamp="2024-12-13 02:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:27:53.432313081 +0000 UTC m=+78.175143551" watchObservedRunningTime="2024-12-13 02:27:54.344746042 +0000 UTC m=+79.087576371" Dec 13 02:27:57.070545 systemd[1]: Started sshd@11-172.24.4.31:22-172.24.4.1:37432.service - OpenSSH per-connection server daemon (172.24.4.1:37432). 
Dec 13 02:27:58.728897 sshd[6194]: Accepted publickey for core from 172.24.4.1 port 37432 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:27:58.770199 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:27:58.794588 systemd-logind[1431]: New session 14 of user core. Dec 13 02:27:58.805038 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 02:28:00.017664 sshd[6194]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:00.030210 systemd[1]: sshd@11-172.24.4.31:22-172.24.4.1:37432.service: Deactivated successfully. Dec 13 02:28:00.032563 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:28:00.034767 systemd-logind[1431]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:28:00.041464 systemd[1]: Started sshd@12-172.24.4.31:22-172.24.4.1:37438.service - OpenSSH per-connection server daemon (172.24.4.1:37438). Dec 13 02:28:00.043452 systemd-logind[1431]: Removed session 14. Dec 13 02:28:01.396645 sshd[6216]: Accepted publickey for core from 172.24.4.1 port 37438 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:01.400281 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:01.410818 systemd-logind[1431]: New session 15 of user core. Dec 13 02:28:01.419651 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 02:28:02.352577 sshd[6216]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:02.368855 systemd[1]: sshd@12-172.24.4.31:22-172.24.4.1:37438.service: Deactivated successfully. Dec 13 02:28:02.372688 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:28:02.379269 systemd-logind[1431]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:28:02.386938 systemd[1]: Started sshd@13-172.24.4.31:22-172.24.4.1:37448.service - OpenSSH per-connection server daemon (172.24.4.1:37448). Dec 13 02:28:02.399651 systemd-logind[1431]: Removed session 15. Dec 13 02:28:03.577372 sshd[6227]: Accepted publickey for core from 172.24.4.1 port 37448 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:03.581014 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:03.591560 systemd-logind[1431]: New session 16 of user core. Dec 13 02:28:03.606458 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 02:28:04.492268 sshd[6227]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:04.498905 systemd-logind[1431]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:28:04.501576 systemd[1]: sshd@13-172.24.4.31:22-172.24.4.1:37448.service: Deactivated successfully. Dec 13 02:28:04.506247 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:28:04.508289 systemd-logind[1431]: Removed session 16. Dec 13 02:28:09.516555 systemd[1]: Started sshd@14-172.24.4.31:22-172.24.4.1:58794.service - OpenSSH per-connection server daemon (172.24.4.1:58794). Dec 13 02:28:10.775269 sshd[6248]: Accepted publickey for core from 172.24.4.1 port 58794 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:10.785589 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:10.795871 systemd-logind[1431]: New session 17 of user core. Dec 13 02:28:10.804425 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 02:28:11.718892 sshd[6248]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:11.725411 systemd[1]: sshd@14-172.24.4.31:22-172.24.4.1:58794.service: Deactivated successfully. Dec 13 02:28:11.728876 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:28:11.730440 systemd-logind[1431]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:28:11.732872 systemd-logind[1431]: Removed session 17. Dec 13 02:28:16.748945 systemd[1]: Started sshd@15-172.24.4.31:22-172.24.4.1:48954.service - OpenSSH per-connection server daemon (172.24.4.1:48954). Dec 13 02:28:18.671188 sshd[6300]: Accepted publickey for core from 172.24.4.1 port 48954 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:18.676002 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:18.688277 systemd-logind[1431]: New session 18 of user core. Dec 13 02:28:18.692702 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 02:28:19.629309 sshd[6300]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:19.640349 systemd[1]: sshd@15-172.24.4.31:22-172.24.4.1:48954.service: Deactivated successfully. Dec 13 02:28:19.648020 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:28:19.650674 systemd-logind[1431]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:28:19.653983 systemd-logind[1431]: Removed session 18. Dec 13 02:28:24.656824 systemd[1]: Started sshd@16-172.24.4.31:22-172.24.4.1:38798.service - OpenSSH per-connection server daemon (172.24.4.1:38798). Dec 13 02:28:25.944751 sshd[6331]: Accepted publickey for core from 172.24.4.1 port 38798 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:25.948073 sshd[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:25.960175 systemd-logind[1431]: New session 19 of user core. Dec 13 02:28:25.967513 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 02:28:26.880117 sshd[6331]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:26.895102 systemd[1]: sshd@16-172.24.4.31:22-172.24.4.1:38798.service: Deactivated successfully. Dec 13 02:28:26.901654 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:28:26.906565 systemd-logind[1431]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:28:26.913765 systemd[1]: Started sshd@17-172.24.4.31:22-172.24.4.1:38804.service - OpenSSH per-connection server daemon (172.24.4.1:38804). Dec 13 02:28:26.917212 systemd-logind[1431]: Removed session 19. Dec 13 02:28:28.211224 sshd[6345]: Accepted publickey for core from 172.24.4.1 port 38804 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:28.214588 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:28.224600 systemd-logind[1431]: New session 20 of user core. Dec 13 02:28:28.237541 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 02:28:29.471219 sshd[6345]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:29.488058 systemd[1]: Started sshd@18-172.24.4.31:22-172.24.4.1:38820.service - OpenSSH per-connection server daemon (172.24.4.1:38820). Dec 13 02:28:29.498706 systemd[1]: sshd@17-172.24.4.31:22-172.24.4.1:38804.service: Deactivated successfully. Dec 13 02:28:29.509375 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:28:29.514862 systemd-logind[1431]: Session 20 logged out. 
Waiting for processes to exit. Dec 13 02:28:29.524372 systemd-logind[1431]: Removed session 20. Dec 13 02:28:30.789529 sshd[6354]: Accepted publickey for core from 172.24.4.1 port 38820 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:30.793409 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:30.807968 systemd-logind[1431]: New session 21 of user core. Dec 13 02:28:30.816413 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 02:28:34.245653 sshd[6354]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:34.271381 systemd[1]: Started sshd@19-172.24.4.31:22-172.24.4.1:38826.service - OpenSSH per-connection server daemon (172.24.4.1:38826). Dec 13 02:28:34.283105 systemd[1]: sshd@18-172.24.4.31:22-172.24.4.1:38820.service: Deactivated successfully. Dec 13 02:28:34.295349 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:28:34.298381 systemd-logind[1431]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:28:34.302798 systemd-logind[1431]: Removed session 21. Dec 13 02:28:35.742593 sshd[6380]: Accepted publickey for core from 172.24.4.1 port 38826 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:35.745580 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:35.756550 systemd-logind[1431]: New session 22 of user core. Dec 13 02:28:35.764432 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 02:28:38.363340 sshd[6380]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:38.374961 systemd[1]: sshd@19-172.24.4.31:22-172.24.4.1:38826.service: Deactivated successfully. Dec 13 02:28:38.378244 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 02:28:38.380267 systemd-logind[1431]: Session 22 logged out. Waiting for processes to exit. Dec 13 02:28:38.388488 systemd[1]: Started sshd@20-172.24.4.31:22-172.24.4.1:33236.service - OpenSSH per-connection server daemon (172.24.4.1:33236). Dec 13 02:28:38.390931 systemd-logind[1431]: Removed session 22. Dec 13 02:28:39.573901 sshd[6395]: Accepted publickey for core from 172.24.4.1 port 33236 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:39.579257 sshd[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:39.593079 systemd-logind[1431]: New session 23 of user core. Dec 13 02:28:39.605877 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 02:28:40.732492 containerd[1455]: time="2024-12-13T02:28:40.691190499Z" level=info msg="StopPodSandbox for \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\"" Dec 13 02:28:40.742028 sshd[6395]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:40.750316 containerd[1455]: time="2024-12-13T02:28:40.749707825Z" level=info msg="TearDown network for sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" successfully" Dec 13 02:28:40.750316 containerd[1455]: time="2024-12-13T02:28:40.749782175Z" level=info msg="StopPodSandbox for \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" returns successfully" Dec 13 02:28:40.751991 systemd[1]: sshd@20-172.24.4.31:22-172.24.4.1:33236.service: Deactivated successfully. Dec 13 02:28:40.756575 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:28:40.759998 systemd-logind[1431]: Session 23 logged out. Waiting for processes to exit. 
Dec 13 02:28:40.763693 systemd-logind[1431]: Removed session 23. Dec 13 02:28:40.765107 containerd[1455]: time="2024-12-13T02:28:40.764559785Z" level=info msg="RemovePodSandbox for \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\"" Dec 13 02:28:40.776505 containerd[1455]: time="2024-12-13T02:28:40.776408951Z" level=info msg="Forcibly stopping sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\"" Dec 13 02:28:40.776925 containerd[1455]: time="2024-12-13T02:28:40.776597865Z" level=info msg="TearDown network for sandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" successfully" Dec 13 02:28:40.814753 containerd[1455]: time="2024-12-13T02:28:40.814383002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:28:40.814753 containerd[1455]: time="2024-12-13T02:28:40.814560355Z" level=info msg="RemovePodSandbox \"c4271f99447d9a6354064428f6f53abc395e3ea0b66582a6c0c7433a3da7a80f\" returns successfully" Dec 13 02:28:40.816095 containerd[1455]: time="2024-12-13T02:28:40.815704452Z" level=info msg="StopPodSandbox for \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\"" Dec 13 02:28:40.816095 containerd[1455]: time="2024-12-13T02:28:40.815926669Z" level=info msg="TearDown network for sandbox \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\" successfully" Dec 13 02:28:40.816095 containerd[1455]: time="2024-12-13T02:28:40.815969268Z" level=info msg="StopPodSandbox for \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\" returns successfully" Dec 13 02:28:40.818370 containerd[1455]: time="2024-12-13T02:28:40.818030425Z" level=info msg="RemovePodSandbox for \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\"" Dec 13 02:28:40.818370 containerd[1455]: time="2024-12-13T02:28:40.818170939Z" level=info msg="Forcibly stopping sandbox \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\"" Dec 13 02:28:40.818736 containerd[1455]: time="2024-12-13T02:28:40.818479257Z" level=info msg="TearDown network for sandbox \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\" successfully" Dec 13 02:28:40.832044 containerd[1455]: time="2024-12-13T02:28:40.830920023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 02:28:40.832044 containerd[1455]: time="2024-12-13T02:28:40.831636738Z" level=info msg="RemovePodSandbox \"1f7727ecba3eb0a25a320f0302a24fc8b92dd4337413984b85cac3bbc4524cfd\" returns successfully" Dec 13 02:28:40.834322 containerd[1455]: time="2024-12-13T02:28:40.833438317Z" level=info msg="StopPodSandbox for \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\"" Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.113 [WARNING][6442] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.115 [INFO][6442] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.115 [INFO][6442] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" iface="eth0" netns="" Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.115 [INFO][6442] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.115 [INFO][6442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.472 [INFO][6449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.476 [INFO][6449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.476 [INFO][6449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.494 [WARNING][6449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.494 [INFO][6449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.497 [INFO][6449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:28:41.501932 containerd[1455]: 2024-12-13 02:28:41.499 [INFO][6442] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:28:41.501932 containerd[1455]: time="2024-12-13T02:28:41.501866229Z" level=info msg="TearDown network for sandbox \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\" successfully" Dec 13 02:28:41.501932 containerd[1455]: time="2024-12-13T02:28:41.501902366Z" level=info msg="StopPodSandbox for \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\" returns successfully" Dec 13 02:28:41.504277 containerd[1455]: time="2024-12-13T02:28:41.503848858Z" level=info msg="RemovePodSandbox for \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\"" Dec 13 02:28:41.504277 containerd[1455]: time="2024-12-13T02:28:41.503883924Z" level=info msg="Forcibly stopping sandbox \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\"" Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.572 [WARNING][6467] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" WorkloadEndpoint="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.572 [INFO][6467] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.573 [INFO][6467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" iface="eth0" netns="" Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.573 [INFO][6467] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.573 [INFO][6467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.607 [INFO][6473] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.607 [INFO][6473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.607 [INFO][6473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.615 [WARNING][6473] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.615 [INFO][6473] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" HandleID="k8s-pod-network.f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Workload="ci--4081--2--1--b--462e46fdf9.novalocal-k8s-calico--kube--controllers--7dd4dd5bd7--zwr86-eth0" Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.617 [INFO][6473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:28:41.623017 containerd[1455]: 2024-12-13 02:28:41.620 [INFO][6467] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e" Dec 13 02:28:41.624390 containerd[1455]: time="2024-12-13T02:28:41.623084700Z" level=info msg="TearDown network for sandbox \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\" successfully" Dec 13 02:28:41.641994 containerd[1455]: time="2024-12-13T02:28:41.641935660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:28:41.642154 containerd[1455]: time="2024-12-13T02:28:41.642025609Z" level=info msg="RemovePodSandbox \"f807108181a7a16acd53a8778cac9c5a91fcf9af572cf7be709bdb7d652c3d3e\" returns successfully" Dec 13 02:28:45.768606 systemd[1]: Started sshd@21-172.24.4.31:22-172.24.4.1:45830.service - OpenSSH per-connection server daemon (172.24.4.1:45830). Dec 13 02:28:47.089628 sshd[6485]: Accepted publickey for core from 172.24.4.1 port 45830 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:47.095419 sshd[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:47.114680 systemd-logind[1431]: New session 24 of user core. Dec 13 02:28:47.126002 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 02:28:48.435439 sshd[6485]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:48.444290 systemd[1]: sshd@21-172.24.4.31:22-172.24.4.1:45830.service: Deactivated successfully. Dec 13 02:28:48.452512 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:28:48.454333 systemd-logind[1431]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:28:48.458001 systemd-logind[1431]: Removed session 24. Dec 13 02:28:51.043820 systemd[1]: run-containerd-runc-k8s.io-97f213146be22ada7c2703c2709cf80329ca49e862e1113e826bd9b88f00c3c3-runc.vIjb39.mount: Deactivated successfully. Dec 13 02:28:53.460774 systemd[1]: Started sshd@22-172.24.4.31:22-172.24.4.1:45838.service - OpenSSH per-connection server daemon (172.24.4.1:45838). Dec 13 02:28:54.510275 sshd[6535]: Accepted publickey for core from 172.24.4.1 port 45838 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:28:54.511996 sshd[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:28:54.533419 systemd-logind[1431]: New session 25 of user core. Dec 13 02:28:54.536450 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 13 02:28:55.341110 sshd[6535]: pam_unix(sshd:session): session closed for user core Dec 13 02:28:55.351072 systemd[1]: sshd@22-172.24.4.31:22-172.24.4.1:45838.service: Deactivated successfully. Dec 13 02:28:55.357347 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:28:55.362769 systemd-logind[1431]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:28:55.366685 systemd-logind[1431]: Removed session 25. Dec 13 02:29:00.367769 systemd[1]: Started sshd@23-172.24.4.31:22-172.24.4.1:59856.service - OpenSSH per-connection server daemon (172.24.4.1:59856). Dec 13 02:29:01.512407 sshd[6548]: Accepted publickey for core from 172.24.4.1 port 59856 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:29:01.515701 sshd[6548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:29:01.530231 systemd-logind[1431]: New session 26 of user core. Dec 13 02:29:01.533490 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 02:29:02.206741 sshd[6548]: pam_unix(sshd:session): session closed for user core Dec 13 02:29:02.214380 systemd[1]: sshd@23-172.24.4.31:22-172.24.4.1:59856.service: Deactivated successfully. Dec 13 02:29:02.221350 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 02:29:02.223526 systemd-logind[1431]: Session 26 logged out. Waiting for processes to exit. Dec 13 02:29:02.227911 systemd-logind[1431]: Removed session 26. Dec 13 02:29:07.223517 systemd[1]: Started sshd@24-172.24.4.31:22-172.24.4.1:53544.service - OpenSSH per-connection server daemon (172.24.4.1:53544). Dec 13 02:29:08.425788 sshd[6561]: Accepted publickey for core from 172.24.4.1 port 53544 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:29:08.432584 sshd[6561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:29:08.444662 systemd-logind[1431]: New session 27 of user core. Dec 13 02:29:08.458564 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 02:29:09.199986 sshd[6561]: pam_unix(sshd:session): session closed for user core Dec 13 02:29:09.207820 systemd[1]: sshd@24-172.24.4.31:22-172.24.4.1:53544.service: Deactivated successfully. Dec 13 02:29:09.212518 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 02:29:09.214942 systemd-logind[1431]: Session 27 logged out. Waiting for processes to exit. Dec 13 02:29:09.217051 systemd-logind[1431]: Removed session 27. Dec 13 02:29:14.221668 systemd[1]: Started sshd@25-172.24.4.31:22-172.24.4.1:53554.service - OpenSSH per-connection server daemon (172.24.4.1:53554). Dec 13 02:29:15.634307 sshd[6606]: Accepted publickey for core from 172.24.4.1 port 53554 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:29:15.637400 sshd[6606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:29:15.648818 systemd-logind[1431]: New session 28 of user core. Dec 13 02:29:15.654433 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 02:29:16.430710 sshd[6606]: pam_unix(sshd:session): session closed for user core Dec 13 02:29:16.440557 systemd-logind[1431]: Session 28 logged out. Waiting for processes to exit. Dec 13 02:29:16.441454 systemd[1]: sshd@25-172.24.4.31:22-172.24.4.1:53554.service: Deactivated successfully. Dec 13 02:29:16.446074 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 02:29:16.451933 systemd-logind[1431]: Removed session 28.