Oct 8 20:00:45.074274 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 8 20:00:45.074317 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:00:45.074329 kernel: BIOS-provided physical RAM map: Oct 8 20:00:45.074337 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 8 20:00:45.074344 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 8 20:00:45.074370 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 8 20:00:45.074379 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Oct 8 20:00:45.074387 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Oct 8 20:00:45.074394 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 8 20:00:45.074404 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 8 20:00:45.074412 kernel: NX (Execute Disable) protection: active Oct 8 20:00:45.074419 kernel: APIC: Static calls initialized Oct 8 20:00:45.074426 kernel: SMBIOS 2.8 present. Oct 8 20:00:45.074434 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Oct 8 20:00:45.074443 kernel: Hypervisor detected: KVM Oct 8 20:00:45.074452 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 8 20:00:45.074460 kernel: kvm-clock: using sched offset of 6239217649 cycles Oct 8 20:00:45.074468 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 8 20:00:45.074476 kernel: tsc: Detected 1996.249 MHz processor Oct 8 20:00:45.074484 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 8 20:00:45.074493 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 8 20:00:45.074500 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Oct 8 20:00:45.074508 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 8 20:00:45.074517 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 8 20:00:45.074526 kernel: ACPI: Early table checksum verification disabled Oct 8 20:00:45.074534 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Oct 8 20:00:45.074542 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:45.074550 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:45.074558 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:45.074567 kernel: ACPI: FACS 0x000000007FFE0000 000040 Oct 8 20:00:45.074574 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:45.074582 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:45.074590 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Oct 8 20:00:45.074600 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Oct 8 20:00:45.074608 kernel: ACPI: Reserving FACS table memory at [mem 
0x7ffe0000-0x7ffe003f] Oct 8 20:00:45.074616 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Oct 8 20:00:45.074624 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Oct 8 20:00:45.074631 kernel: No NUMA configuration found Oct 8 20:00:45.074639 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Oct 8 20:00:45.074647 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Oct 8 20:00:45.074658 kernel: Zone ranges: Oct 8 20:00:45.074668 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 20:00:45.074676 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Oct 8 20:00:45.074684 kernel: Normal empty Oct 8 20:00:45.074693 kernel: Movable zone start for each node Oct 8 20:00:45.074701 kernel: Early memory node ranges Oct 8 20:00:45.074709 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 8 20:00:45.074717 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Oct 8 20:00:45.074727 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Oct 8 20:00:45.074736 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 20:00:45.074744 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 8 20:00:45.074752 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Oct 8 20:00:45.074760 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 8 20:00:45.074768 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 8 20:00:45.074776 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 8 20:00:45.074785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 8 20:00:45.074793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 8 20:00:45.074803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 20:00:45.074812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 8 20:00:45.074820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 8 20:00:45.074828 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 8 20:00:45.074836 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 8 20:00:45.074844 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 8 20:00:45.074852 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Oct 8 20:00:45.074860 kernel: Booting paravirtualized kernel on KVM Oct 8 20:00:45.074869 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 20:00:45.074879 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 8 20:00:45.074888 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 8 20:00:45.074896 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 8 20:00:45.074904 kernel: pcpu-alloc: [0] 0 1 Oct 8 20:00:45.074912 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 8 20:00:45.074922 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:00:45.074931 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 8 20:00:45.074939 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 20:00:45.074949 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 8 20:00:45.074958 kernel: Fallback order for Node 0: 0 Oct 8 20:00:45.074966 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Oct 8 20:00:45.074974 kernel: Policy zone: DMA32 Oct 8 20:00:45.074982 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 20:00:45.074991 kernel: Memory: 1971212K/2096620K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved) Oct 8 20:00:45.074999 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 8 20:00:45.075007 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 20:00:45.075017 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 20:00:45.075025 kernel: Dynamic Preempt: voluntary Oct 8 20:00:45.075034 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 20:00:45.075042 kernel: rcu: RCU event tracing is enabled. Oct 8 20:00:45.075051 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 8 20:00:45.075059 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 20:00:45.075068 kernel: Rude variant of Tasks RCU enabled. Oct 8 20:00:45.075076 kernel: Tracing variant of Tasks RCU enabled. Oct 8 20:00:45.075084 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 8 20:00:45.075093 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 8 20:00:45.075103 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 8 20:00:45.075111 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 20:00:45.075119 kernel: Console: colour VGA+ 80x25 Oct 8 20:00:45.075127 kernel: printk: console [tty0] enabled Oct 8 20:00:45.075135 kernel: printk: console [ttyS0] enabled Oct 8 20:00:45.075143 kernel: ACPI: Core revision 20230628 Oct 8 20:00:45.075151 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 20:00:45.075159 kernel: x2apic enabled Oct 8 20:00:45.075168 kernel: APIC: Switched APIC routing to: physical x2apic Oct 8 20:00:45.075178 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 8 20:00:45.075186 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 8 20:00:45.075194 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Oct 8 20:00:45.075203 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 8 20:00:45.075211 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 8 20:00:45.075219 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 20:00:45.075227 kernel: Spectre V2 : Mitigation: Retpolines Oct 8 20:00:45.075235 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 20:00:45.075244 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 20:00:45.075254 kernel: Speculative Store Bypass: Vulnerable Oct 8 20:00:45.075262 kernel: x86/fpu: x87 FPU will use FXSAVE Oct 8 20:00:45.075270 kernel: Freeing SMP alternatives memory: 32K Oct 8 20:00:45.075278 kernel: pid_max: default: 32768 minimum: 301 Oct 8 20:00:45.075287 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 20:00:45.075295 kernel: landlock: Up and running. Oct 8 20:00:45.075303 kernel: SELinux: Initializing. 
Oct 8 20:00:45.075311 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 8 20:00:45.075327 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 8 20:00:45.075336 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Oct 8 20:00:45.075345 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:00:45.075385 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:00:45.075397 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:00:45.075405 kernel: Performance Events: AMD PMU driver. Oct 8 20:00:45.075413 kernel: ... version: 0 Oct 8 20:00:45.075422 kernel: ... bit width: 48 Oct 8 20:00:45.075431 kernel: ... generic registers: 4 Oct 8 20:00:45.075441 kernel: ... value mask: 0000ffffffffffff Oct 8 20:00:45.075450 kernel: ... max period: 00007fffffffffff Oct 8 20:00:45.075458 kernel: ... fixed-purpose events: 0 Oct 8 20:00:45.075467 kernel: ... event mask: 000000000000000f Oct 8 20:00:45.075475 kernel: signal: max sigframe size: 1440 Oct 8 20:00:45.075484 kernel: rcu: Hierarchical SRCU implementation. Oct 8 20:00:45.075493 kernel: rcu: Max phase no-delay instances is 400. Oct 8 20:00:45.075501 kernel: smp: Bringing up secondary CPUs ... Oct 8 20:00:45.075509 kernel: smpboot: x86: Booting SMP configuration: Oct 8 20:00:45.075520 kernel: .... node #0, CPUs: #1 Oct 8 20:00:45.075529 kernel: smp: Brought up 1 node, 2 CPUs Oct 8 20:00:45.075537 kernel: smpboot: Max logical packages: 2 Oct 8 20:00:45.075546 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Oct 8 20:00:45.075554 kernel: devtmpfs: initialized Oct 8 20:00:45.075563 kernel: x86/mm: Memory block size: 128MB Oct 8 20:00:45.075572 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 20:00:45.075580 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 8 20:00:45.075589 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 20:00:45.075599 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 20:00:45.075608 kernel: audit: initializing netlink subsys (disabled) Oct 8 20:00:45.075617 kernel: audit: type=2000 audit(1728417644.522:1): state=initialized audit_enabled=0 res=1 Oct 8 20:00:45.075625 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 20:00:45.075634 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 20:00:45.075642 kernel: cpuidle: using governor menu Oct 8 20:00:45.075651 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 20:00:45.075659 kernel: dca service started, version 1.12.1 Oct 8 20:00:45.075668 kernel: PCI: Using configuration type 1 for base access Oct 8 20:00:45.075678 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 8 20:00:45.075687 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 20:00:45.075695 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 20:00:45.075704 kernel: ACPI: Added _OSI(Module Device) Oct 8 20:00:45.075712 kernel: ACPI: Added _OSI(Processor Device) Oct 8 20:00:45.075721 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 20:00:45.075729 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 20:00:45.075738 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 20:00:45.075746 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 8 20:00:45.075757 kernel: ACPI: Interpreter enabled Oct 8 20:00:45.075766 kernel: ACPI: PM: (supports S0 S3 S5) Oct 8 20:00:45.075774 kernel: ACPI: Using IOAPIC for interrupt routing Oct 8 20:00:45.075783 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 8 20:00:45.075803 kernel: PCI: Using E820 reservations for host bridge windows Oct 8 20:00:45.075812 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 8 20:00:45.075820 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 20:00:45.075994 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 8 20:00:45.076100 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 8 20:00:45.076193 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 8 20:00:45.076206 kernel: acpiphp: Slot [3] registered Oct 8 20:00:45.076215 kernel: acpiphp: Slot [4] registered Oct 8 20:00:45.076224 kernel: acpiphp: Slot [5] registered Oct 8 20:00:45.076232 kernel: acpiphp: Slot [6] registered Oct 8 20:00:45.076241 kernel: acpiphp: Slot [7] registered Oct 8 20:00:45.076249 kernel: acpiphp: Slot [8] registered Oct 8 20:00:45.076260 kernel: acpiphp: Slot [9] registered Oct 8 20:00:45.076269 kernel: acpiphp: Slot [10] registered Oct 8 20:00:45.076277 kernel: acpiphp: Slot [11] registered Oct 8 20:00:45.076286 kernel: acpiphp: Slot [12] registered Oct 8 20:00:45.076294 kernel: acpiphp: Slot [13] registered Oct 8 20:00:45.076302 kernel: acpiphp: Slot [14] registered Oct 8 20:00:45.076311 kernel: acpiphp: Slot [15] registered Oct 8 20:00:45.076319 kernel: acpiphp: Slot [16] registered Oct 8 20:00:45.076327 kernel: acpiphp: Slot [17] registered Oct 8 20:00:45.076336 kernel: acpiphp: Slot [18] registered Oct 8 20:00:45.076346 kernel: acpiphp: Slot [19] registered Oct 8 20:00:45.076372 kernel: acpiphp: Slot [20] registered Oct 8 20:00:45.076381 kernel: acpiphp: Slot [21] registered Oct 8 20:00:45.076389 kernel: acpiphp: Slot [22] registered Oct 8 20:00:45.076397 kernel: acpiphp: Slot [23] registered Oct 8 20:00:45.076406 kernel: acpiphp: Slot [24] registered Oct 8 20:00:45.076414 kernel: acpiphp: Slot [25] registered Oct 8 20:00:45.076422 kernel: acpiphp: Slot [26] registered Oct 8 20:00:45.076431 kernel: acpiphp: Slot [27] registered Oct 8 20:00:45.076442 kernel: acpiphp: Slot [28] registered Oct 8 20:00:45.076450 kernel: acpiphp: Slot [29] registered Oct 8 20:00:45.076459 kernel: acpiphp: Slot [30] registered Oct 8 20:00:45.076467 kernel: acpiphp: Slot [31] registered Oct 8 20:00:45.076475 kernel: PCI host bridge to bus 0000:00 Oct 8 20:00:45.076578 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 8 20:00:45.076666 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 8 20:00:45.076749 kernel: pci_bus 0000:00: 
root bus resource [mem 0x000a0000-0x000bffff window] Oct 8 20:00:45.076836 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 8 20:00:45.076916 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Oct 8 20:00:45.076997 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 20:00:45.077126 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 8 20:00:45.077233 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 8 20:00:45.077338 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 8 20:00:45.077463 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Oct 8 20:00:45.077557 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 8 20:00:45.077650 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 8 20:00:45.077746 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 8 20:00:45.077838 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 8 20:00:45.077939 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 8 20:00:45.078041 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 8 20:00:45.078138 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 8 20:00:45.078247 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Oct 8 20:00:45.078343 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Oct 8 20:00:45.080139 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Oct 8 20:00:45.080244 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Oct 8 20:00:45.080343 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Oct 8 20:00:45.082949 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 8 20:00:45.083067 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Oct 8 20:00:45.083169 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Oct 8 20:00:45.083260 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Oct 8 20:00:45.083374 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Oct 8 20:00:45.083474 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Oct 8 20:00:45.083574 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 8 20:00:45.083668 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 8 20:00:45.083765 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Oct 8 20:00:45.083879 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Oct 8 20:00:45.083987 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Oct 8 20:00:45.084087 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Oct 8 20:00:45.084186 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Oct 8 20:00:45.084294 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Oct 8 20:00:45.091276 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Oct 8 20:00:45.091486 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Oct 8 20:00:45.091501 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 8 20:00:45.091511 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 8 20:00:45.091520 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 8 20:00:45.091529 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 8 20:00:45.091537 kernel: ACPI: PCI: Interrupt link LNKS configured for 
IRQ 9 Oct 8 20:00:45.091546 kernel: iommu: Default domain type: Translated Oct 8 20:00:45.091555 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 8 20:00:45.091568 kernel: PCI: Using ACPI for IRQ routing Oct 8 20:00:45.091577 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 8 20:00:45.091585 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 8 20:00:45.091594 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Oct 8 20:00:45.091685 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 8 20:00:45.091781 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 8 20:00:45.091893 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 8 20:00:45.091906 kernel: vgaarb: loaded Oct 8 20:00:45.091915 kernel: clocksource: Switched to clocksource kvm-clock Oct 8 20:00:45.091928 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 20:00:45.091936 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 20:00:45.091945 kernel: pnp: PnP ACPI init Oct 8 20:00:45.092049 kernel: pnp 00:03: [dma 2] Oct 8 20:00:45.092064 kernel: pnp: PnP ACPI: found 5 devices Oct 8 20:00:45.092073 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 8 20:00:45.092082 kernel: NET: Registered PF_INET protocol family Oct 8 20:00:45.092090 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 20:00:45.092103 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 8 20:00:45.092112 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 20:00:45.092120 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 8 20:00:45.092129 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 8 20:00:45.092138 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 8 20:00:45.092147 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 8 20:00:45.092155 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 8 20:00:45.092164 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 20:00:45.092172 kernel: NET: Registered PF_XDP protocol family Oct 8 20:00:45.092267 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 8 20:00:45.092372 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 8 20:00:45.092462 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 8 20:00:45.092543 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 8 20:00:45.092623 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 8 20:00:45.092716 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 8 20:00:45.092812 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 8 20:00:45.092826 kernel: PCI: CLS 0 bytes, default 64 Oct 8 20:00:45.092839 kernel: Initialise system trusted keyrings Oct 8 20:00:45.092848 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 8 20:00:45.092856 kernel: Key type asymmetric registered Oct 8 20:00:45.092865 kernel: Asymmetric key parser 'x509' registered Oct 8 20:00:45.092874 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 8 20:00:45.092882 kernel: io scheduler mq-deadline registered Oct 8 20:00:45.092891 kernel: io scheduler kyber registered Oct 8 20:00:45.092900 kernel: io scheduler bfq registered Oct 8 20:00:45.092908 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 8 20:00:45.092926 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Oct 8 20:00:45.092935 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 8 20:00:45.092944 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 8 20:00:45.092953 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 8 20:00:45.092962 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 20:00:45.092971 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 8 20:00:45.092980 kernel: random: crng init done Oct 8 20:00:45.092988 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 8 20:00:45.092997 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 8 20:00:45.093008 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 8 20:00:45.093016 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 8 20:00:45.093121 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 8 20:00:45.093212 kernel: rtc_cmos 00:04: registered as rtc0 Oct 8 20:00:45.093297 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T20:00:44 UTC (1728417644) Oct 8 20:00:45.093420 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 8 20:00:45.093435 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 8 20:00:45.093444 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:00:45.093457 kernel: Segment Routing with IPv6 Oct 8 20:00:45.093466 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:00:45.093474 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:00:45.093483 kernel: Key type dns_resolver registered Oct 8 20:00:45.093491 kernel: IPI shorthand broadcast: enabled Oct 8 20:00:45.093500 kernel: sched_clock: Marking stable (846021201, 127901367)->(986992525, -13069957) Oct 8 20:00:45.093508 kernel: registered taskstats version 1 Oct 8 20:00:45.093517 kernel: Loading compiled-in X.509 certificates Oct 8 20:00:45.093526 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 20:00:45.093536 kernel: Key type .fscrypt registered Oct 8 20:00:45.093545 kernel: Key type fscrypt-provisioning registered Oct 8 20:00:45.093554 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 20:00:45.093562 kernel: ima: Allocated hash algorithm: sha1 Oct 8 20:00:45.093571 kernel: ima: No architecture policies found Oct 8 20:00:45.093579 kernel: clk: Disabling unused clocks Oct 8 20:00:45.093588 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 8 20:00:45.093596 kernel: Write protecting the kernel read-only data: 36864k Oct 8 20:00:45.093607 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 8 20:00:45.093616 kernel: Run /init as init process Oct 8 20:00:45.093624 kernel: with arguments: Oct 8 20:00:45.093633 kernel: /init Oct 8 20:00:45.093641 kernel: with environment: Oct 8 20:00:45.093650 kernel: HOME=/ Oct 8 20:00:45.093658 kernel: TERM=linux Oct 8 20:00:45.093666 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 20:00:45.093683 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:00:45.093697 systemd[1]: Detected virtualization kvm. 
Oct 8 20:00:45.093706 systemd[1]: Detected architecture x86-64. Oct 8 20:00:45.093715 systemd[1]: Running in initrd. Oct 8 20:00:45.093725 systemd[1]: No hostname configured, using default hostname. Oct 8 20:00:45.093734 systemd[1]: Hostname set to . Oct 8 20:00:45.093743 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:00:45.093753 systemd[1]: Queued start job for default target initrd.target. Oct 8 20:00:45.093765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:00:45.093774 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:00:45.093784 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 20:00:45.093794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:00:45.093803 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 20:00:45.093813 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 20:00:45.093824 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 20:00:45.093835 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 20:00:45.093845 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:00:45.093854 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:00:45.093864 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:00:45.093882 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:00:45.093894 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:00:45.093905 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:00:45.093914 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:00:45.093924 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:00:45.093934 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 20:00:45.093943 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 20:00:45.093953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:00:45.093963 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:00:45.093972 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:00:45.093982 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:00:45.093994 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 20:00:45.094004 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:00:45.094014 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 20:00:45.094023 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 20:00:45.094033 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:00:45.094042 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:00:45.094052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:45.094062 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 20:00:45.094074 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Oct 8 20:00:45.094117 systemd-journald[184]: Collecting audit messages is disabled. Oct 8 20:00:45.094141 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 20:00:45.094155 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:00:45.094166 systemd-journald[184]: Journal started Oct 8 20:00:45.094192 systemd-journald[184]: Runtime Journal (/run/log/journal/ad48961f70bc40f1bb152d19ea4d2fc7) is 4.9M, max 39.3M, 34.4M free. Oct 8 20:00:45.048828 systemd-modules-load[185]: Inserted module 'overlay' Oct 8 20:00:45.135384 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:00:45.135484 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 20:00:45.135511 kernel: Bridge firewalling registered Oct 8 20:00:45.098410 systemd-modules-load[185]: Inserted module 'br_netfilter' Oct 8 20:00:45.135863 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:00:45.136740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:45.138059 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:00:45.146680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:00:45.150566 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:00:45.154663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:00:45.162037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:00:45.177212 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:00:45.178054 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:45.182709 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 20:00:45.197402 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:00:45.200605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:00:45.206267 dracut-cmdline[214]: dracut-dracut-053 Oct 8 20:00:45.207516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:00:45.210369 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:00:45.250081 systemd-resolved[224]: Positive Trust Anchors: Oct 8 20:00:45.250109 systemd-resolved[224]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:00:45.250150 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:00:45.257131 systemd-resolved[224]: Defaulting to hostname 'linux'. Oct 8 20:00:45.258974 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:00:45.260259 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:00:45.293442 kernel: SCSI subsystem initialized Oct 8 20:00:45.304382 kernel: Loading iSCSI transport class v2.0-870. Oct 8 20:00:45.316403 kernel: iscsi: registered transport (tcp) Oct 8 20:00:45.341782 kernel: iscsi: registered transport (qla4xxx) Oct 8 20:00:45.342496 kernel: QLogic iSCSI HBA Driver Oct 8 20:00:45.413803 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 20:00:45.418501 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 20:00:45.497647 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 20:00:45.497772 kernel: device-mapper: uevent: version 1.0.3 Oct 8 20:00:45.500486 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 20:00:45.576515 kernel: raid6: sse2x4 gen() 5200 MB/s Oct 8 20:00:45.594489 kernel: raid6: sse2x2 gen() 5925 MB/s Oct 8 20:00:45.611688 kernel: raid6: sse2x1 gen() 9363 MB/s Oct 8 20:00:45.611821 kernel: raid6: using algorithm sse2x1 gen() 9363 MB/s Oct 8 20:00:45.631700 kernel: raid6: .... xor() 7297 MB/s, rmw enabled Oct 8 20:00:45.631773 kernel: raid6: using ssse3x2 recovery algorithm Oct 8 20:00:45.654927 kernel: xor: measuring software checksum speed Oct 8 20:00:45.655007 kernel: prefetch64-sse : 18385 MB/sec Oct 8 20:00:45.655437 kernel: generic_sse : 16788 MB/sec Oct 8 20:00:45.656490 kernel: xor: using function: prefetch64-sse (18385 MB/sec) Oct 8 20:00:45.854457 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 20:00:45.874517 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:00:45.885682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:00:45.928676 systemd-udevd[401]: Using default interface naming scheme 'v255'. Oct 8 20:00:45.940480 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:00:45.950605 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 20:00:45.979721 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Oct 8 20:00:46.023733 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:00:46.033651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:00:46.077171 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:00:46.083540 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Oct 8 20:00:46.106792 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:00:46.108074 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:00:46.110770 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:00:46.116207 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:00:46.124829 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:00:46.147982 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:00:46.178998 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Oct 8 20:00:46.189647 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Oct 8 20:00:46.198622 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 20:00:46.198679 kernel: GPT:17805311 != 41943039 Oct 8 20:00:46.198692 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 20:00:46.199686 kernel: GPT:17805311 != 41943039 Oct 8 20:00:46.200470 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 20:00:46.204682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:46.205419 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:00:46.205635 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:46.207308 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:00:46.208614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:00:46.208752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:46.209244 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:46.216205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:46.228895 kernel: libata version 3.00 loaded. Oct 8 20:00:46.230963 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 8 20:00:46.234378 kernel: scsi host0: ata_piix Oct 8 20:00:46.234521 kernel: scsi host1: ata_piix Oct 8 20:00:46.234630 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Oct 8 20:00:46.237425 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Oct 8 20:00:46.282005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:46.295525 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:00:46.309565 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:46.442399 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by (udev-worker) (465) Oct 8 20:00:46.455410 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (454) Oct 8 20:00:46.480832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 8 20:00:46.487422 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 8 20:00:46.493812 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 20:00:46.499112 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Oct 8 20:00:46.500695 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 8 20:00:46.514787 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:00:46.528563 disk-uuid[505]: Primary Header is updated. Oct 8 20:00:46.528563 disk-uuid[505]: Secondary Entries is updated. Oct 8 20:00:46.528563 disk-uuid[505]: Secondary Header is updated. Oct 8 20:00:46.541437 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:46.552451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:46.563433 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:47.568442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:47.569746 disk-uuid[506]: The operation has completed successfully. Oct 8 20:00:47.649318 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:00:47.649528 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 20:00:47.673486 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 20:00:47.682625 sh[519]: Success Oct 8 20:00:47.704406 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Oct 8 20:00:47.801908 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 20:00:47.803765 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:00:47.810564 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:00:47.829686 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 8 20:00:47.829757 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:00:47.832395 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:00:47.832467 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:00:47.834691 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:00:47.846191 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:00:47.848523 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:00:47.853621 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 20:00:47.859741 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 20:00:47.870660 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:47.870706 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:00:47.870718 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:00:47.877532 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:00:47.891126 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 20:00:47.894399 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:47.907688 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 20:00:47.915918 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:00:47.997125 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:00:48.005603 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 8 20:00:48.033618 systemd-networkd[703]: lo: Link UP Oct 8 20:00:48.033625 systemd-networkd[703]: lo: Gained carrier Oct 8 20:00:48.036037 systemd-networkd[703]: Enumeration completed Oct 8 20:00:48.036127 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:00:48.037254 systemd[1]: Reached target network.target - Network. Oct 8 20:00:48.037500 systemd-networkd[703]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:00:48.037505 systemd-networkd[703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:00:48.039005 systemd-networkd[703]: eth0: Link UP Oct 8 20:00:48.039009 systemd-networkd[703]: eth0: Gained carrier Oct 8 20:00:48.039017 systemd-networkd[703]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:00:48.061627 systemd-networkd[703]: eth0: DHCPv4 address 172.24.4.139/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 8 20:00:48.080050 ignition[609]: Ignition 2.19.0 Oct 8 20:00:48.080854 ignition[609]: Stage: fetch-offline Oct 8 20:00:48.080898 ignition[609]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:48.080908 ignition[609]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 8 20:00:48.082428 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:00:48.081013 ignition[609]: parsed url from cmdline: "" Oct 8 20:00:48.081017 ignition[609]: no config URL provided Oct 8 20:00:48.081023 ignition[609]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:00:48.081031 ignition[609]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:00:48.081037 ignition[609]: failed to fetch config: resource requires networking Oct 8 20:00:48.081224 ignition[609]: Ignition finished successfully Oct 8 20:00:48.090582 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 8 20:00:48.103401 ignition[713]: Ignition 2.19.0 Oct 8 20:00:48.103414 ignition[713]: Stage: fetch Oct 8 20:00:48.103590 ignition[713]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:48.103603 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 8 20:00:48.103691 ignition[713]: parsed url from cmdline: "" Oct 8 20:00:48.103695 ignition[713]: no config URL provided Oct 8 20:00:48.103700 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:00:48.103709 ignition[713]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:00:48.103817 ignition[713]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 8 20:00:48.103844 ignition[713]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 8 20:00:48.103861 ignition[713]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Oct 8 20:00:48.292251 ignition[713]: GET result: OK Oct 8 20:00:48.292462 ignition[713]: parsing config with SHA512: bd4ad99dff03431f4e74045d924e0526bb2413fc402fd54360d07f528a9b97aafc3c5a465ed1ed782784442176128034107d47c6ccb9be0f536a02aa0739e5ac Oct 8 20:00:48.300822 unknown[713]: fetched base config from "system" Oct 8 20:00:48.300848 unknown[713]: fetched base config from "system" Oct 8 20:00:48.301824 ignition[713]: fetch: fetch complete Oct 8 20:00:48.300862 unknown[713]: fetched user config from "openstack" Oct 8 20:00:48.301837 ignition[713]: fetch: fetch passed Oct 8 20:00:48.306761 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Oct 8 20:00:48.301924 ignition[713]: Ignition finished successfully Oct 8 20:00:48.318734 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 8 20:00:48.350152 ignition[719]: Ignition 2.19.0 Oct 8 20:00:48.350183 ignition[719]: Stage: kargs Oct 8 20:00:48.350664 ignition[719]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:48.350691 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 8 20:00:48.353417 ignition[719]: kargs: kargs passed Oct 8 20:00:48.355410 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 20:00:48.353508 ignition[719]: Ignition finished successfully Oct 8 20:00:48.363692 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 20:00:48.397732 ignition[726]: Ignition 2.19.0 Oct 8 20:00:48.397745 ignition[726]: Stage: disks Oct 8 20:00:48.397950 ignition[726]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:48.399982 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 20:00:48.397963 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 8 20:00:48.401807 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 20:00:48.398921 ignition[726]: disks: disks passed Oct 8 20:00:48.403180 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 20:00:48.398965 ignition[726]: Ignition finished successfully Oct 8 20:00:48.405010 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:00:48.407211 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:00:48.409431 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:00:48.416539 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:00:48.444836 systemd-fsck[734]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 8 20:00:48.458447 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:00:48.463563 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:00:48.631610 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. Oct 8 20:00:48.632210 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 20:00:48.633283 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:00:48.641553 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:00:48.644808 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:00:48.648306 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 20:00:48.654675 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (742) Oct 8 20:00:48.653931 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Oct 8 20:00:48.665099 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:48.665147 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:00:48.665176 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:00:48.665204 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:00:48.654726 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Oct 8 20:00:48.654766 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:00:48.664105 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 20:00:48.667244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:00:48.683489 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 20:00:48.839364 initrd-setup-root[770]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:00:48.846010 initrd-setup-root[777]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:00:48.851765 initrd-setup-root[784]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:00:48.858519 initrd-setup-root[791]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:00:48.994261 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:00:49.003451 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:00:49.006511 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:00:49.013332 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:00:49.014084 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:49.049867 ignition[858]: INFO : Ignition 2.19.0 Oct 8 20:00:49.051277 ignition[858]: INFO : Stage: mount Oct 8 20:00:49.053335 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:49.053335 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 8 20:00:49.056928 ignition[858]: INFO : mount: mount passed Oct 8 20:00:49.056928 ignition[858]: INFO : Ignition finished successfully Oct 8 20:00:49.055284 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:00:49.059668 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 20:00:49.823961 systemd-networkd[703]: eth0: Gained IPv6LL Oct 8 20:00:55.910582 coreos-metadata[744]: Oct 08 20:00:55.910 WARN failed to locate config-drive, using the metadata service API instead Oct 8 20:00:55.951881 coreos-metadata[744]: Oct 08 20:00:55.951 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 8 20:00:55.968291 coreos-metadata[744]: Oct 08 20:00:55.968 INFO Fetch successful Oct 8 20:00:55.969753 coreos-metadata[744]: Oct 08 20:00:55.969 INFO wrote hostname ci-4081-1-0-b-d257b8cc02.novalocal to /sysroot/etc/hostname Oct 8 20:00:55.973071 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 8 20:00:55.973398 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Oct 8 20:00:55.988584 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:00:56.024703 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:00:56.041493 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (875) Oct 8 20:00:56.043408 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:56.047241 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:00:56.051911 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:00:56.069478 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:00:56.074501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 20:00:56.121765 ignition[893]: INFO : Ignition 2.19.0 Oct 8 20:00:56.121765 ignition[893]: INFO : Stage: files Oct 8 20:00:56.124592 ignition[893]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:56.124592 ignition[893]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 8 20:00:56.128032 ignition[893]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:00:56.128032 ignition[893]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:00:56.128032 ignition[893]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:00:56.134675 ignition[893]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:00:56.136825 ignition[893]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:00:56.139141 ignition[893]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:00:56.138821 unknown[893]: wrote ssh authorized keys file for user: core Oct 8 20:00:56.143082 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:00:56.145477 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 20:00:56.216790 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:00:56.546669 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:00:56.546669 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:00:56.553093 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Oct 8 20:00:57.104715 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 20:00:58.789631 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:00:58.789631 ignition[893]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 20:00:58.794890 ignition[893]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:00:58.794890 ignition[893]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:00:58.794890 ignition[893]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 20:00:58.794890 ignition[893]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:00:58.794890 ignition[893]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:00:58.794890 ignition[893]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:00:58.794890 ignition[893]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:00:58.794890 ignition[893]: INFO : files: files passed Oct 8 20:00:58.794890 ignition[893]: INFO : Ignition finished successfully Oct 8 20:00:58.796796 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:00:58.805671 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:00:58.816590 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 20:00:58.825037 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:00:58.825257 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 20:00:58.831520 initrd-setup-root-after-ignition[920]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:00:58.831520 initrd-setup-root-after-ignition[920]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:00:58.837570 initrd-setup-root-after-ignition[924]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:00:58.836569 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:00:58.838814 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:00:58.846607 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:00:58.885478 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:00:58.885761 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:00:58.888441 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Oct 8 20:00:58.889839 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:00:58.891839 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:00:58.898635 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:00:58.916475 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:00:58.934891 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:00:58.953069 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:00:58.954781 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:00:58.956877 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:00:58.958775 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:00:58.959203 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:00:58.961832 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:00:58.963852 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:00:58.965590 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:00:58.967405 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:00:58.968628 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 20:00:58.969728 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:00:58.970879 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:00:58.972147 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:00:58.973301 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:00:58.974480 systemd[1]: Stopped target swap.target - Swaps. Oct 8 20:00:58.975502 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:00:58.975639 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:00:58.977014 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:00:58.977736 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:00:58.978969 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 20:00:58.980599 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:00:58.981612 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:00:58.981784 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 20:00:58.983140 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 20:00:58.983278 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:00:58.984703 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:00:58.984822 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:00:58.998269 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:00:59.001683 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:00:59.002321 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:00:59.002537 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:00:59.004416 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Oct 8 20:00:59.004590 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:00:59.011636 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 20:00:59.011743 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:00:59.018560 ignition[945]: INFO : Ignition 2.19.0 Oct 8 20:00:59.020432 ignition[945]: INFO : Stage: umount Oct 8 20:00:59.020432 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:59.020432 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 8 20:00:59.023308 ignition[945]: INFO : umount: umount passed Oct 8 20:00:59.023308 ignition[945]: INFO : Ignition finished successfully Oct 8 20:00:59.023829 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:00:59.023971 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 20:00:59.025642 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:00:59.025721 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:00:59.028846 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:00:59.028922 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:00:59.031502 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 20:00:59.031570 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 8 20:00:59.032972 systemd[1]: Stopped target network.target - Network. Oct 8 20:00:59.035221 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:00:59.035282 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:00:59.035885 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:00:59.036364 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:00:59.040405 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:00:59.040956 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:00:59.042440 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 20:00:59.043517 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 20:00:59.043563 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:00:59.044437 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 20:00:59.044472 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:00:59.045391 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:00:59.045438 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:00:59.046324 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:00:59.046383 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 20:00:59.047407 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:00:59.048488 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:00:59.050645 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:00:59.053441 systemd-networkd[703]: eth0: DHCPv6 lease lost Oct 8 20:00:59.056527 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 20:00:59.056685 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:00:59.059415 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 20:00:59.059558 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Oct 8 20:00:59.061449 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:00:59.061513 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:00:59.067752 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:00:59.068322 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:00:59.068415 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:00:59.069063 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:00:59.069111 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:00:59.070145 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:00:59.070191 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:00:59.071247 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:00:59.071292 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:00:59.072578 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:00:59.081791 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:00:59.081950 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:00:59.085464 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 20:00:59.085580 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:00:59.088035 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:00:59.088097 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 20:00:59.089308 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 20:00:59.089344 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:00:59.090487 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 20:00:59.090535 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:00:59.092110 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 20:00:59.092156 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 20:00:59.093180 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:00:59.093227 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:59.099603 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 20:00:59.102677 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 20:00:59.102751 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:00:59.103945 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 8 20:00:59.103991 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:00:59.105089 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:00:59.105133 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:00:59.106672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:00:59.106714 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:59.108404 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Oct 8 20:00:59.108497 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:00:59.561681 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:00:59.562003 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:00:59.564920 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 20:00:59.567014 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:00:59.567130 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:00:59.577704 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:00:59.612638 systemd[1]: Switching root. Oct 8 20:00:59.654329 systemd-journald[184]: Journal stopped Oct 8 20:01:01.963709 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Oct 8 20:01:01.963845 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 20:01:01.963864 kernel: SELinux: policy capability open_perms=1 Oct 8 20:01:01.963876 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 20:01:01.963888 kernel: SELinux: policy capability always_check_network=0 Oct 8 20:01:01.963899 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 20:01:01.963910 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 20:01:01.963925 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 20:01:01.963936 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 20:01:01.963947 kernel: audit: type=1403 audit(1728417660.207:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 20:01:01.963960 systemd[1]: Successfully loaded SELinux policy in 71.479ms. Oct 8 20:01:01.963982 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.948ms. Oct 8 20:01:01.963995 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:01:01.964009 systemd[1]: Detected virtualization kvm. Oct 8 20:01:01.964022 systemd[1]: Detected architecture x86-64. Oct 8 20:01:01.964036 systemd[1]: Detected first boot. Oct 8 20:01:01.964049 systemd[1]: Hostname set to . Oct 8 20:01:01.964061 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:01:01.964073 zram_generator::config[987]: No configuration found. Oct 8 20:01:01.964087 systemd[1]: Populated /etc with preset unit settings. Oct 8 20:01:01.964101 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 20:01:01.964113 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 20:01:01.964126 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 20:01:01.964139 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 20:01:01.964154 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 20:01:01.964169 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 20:01:01.964181 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 20:01:01.964194 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 20:01:01.964207 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Oct 8 20:01:01.964222 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 20:01:01.964234 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 20:01:01.964246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:01:01.964260 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:01:01.964273 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 20:01:01.964285 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 20:01:01.964297 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 20:01:01.964310 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:01:01.964322 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 8 20:01:01.964334 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:01:01.964346 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 20:01:01.967655 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 20:01:01.967674 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 20:01:01.967688 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 20:01:01.967702 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:01:01.967715 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:01:01.967728 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:01:01.967741 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:01:01.967753 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 20:01:01.967783 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 20:01:01.967797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:01:01.967809 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:01:01.967822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:01:01.967835 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 20:01:01.967847 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 20:01:01.967860 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 20:01:01.967873 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 20:01:01.967885 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:01:01.967900 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 20:01:01.967913 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 20:01:01.967926 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 20:01:01.967940 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 20:01:01.967952 systemd[1]: Reached target machines.target - Containers. Oct 8 20:01:01.967965 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Oct 8 20:01:01.967978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:01:01.967990 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:01:01.968005 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 20:01:01.968018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:01:01.968030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:01:01.968043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:01:01.968056 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 20:01:01.968069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:01:01.968082 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 20:01:01.968094 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 20:01:01.968109 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 20:01:01.968122 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 20:01:01.968134 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 20:01:01.968146 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:01:01.968158 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:01:01.968171 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 20:01:01.968183 kernel: loop: module loaded Oct 8 20:01:01.968197 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 20:01:01.968210 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:01:01.968223 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 20:01:01.968238 systemd[1]: Stopped verity-setup.service. Oct 8 20:01:01.968251 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:01:01.968264 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 20:01:01.968277 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 20:01:01.968289 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 20:01:01.968302 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 20:01:01.968316 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 20:01:01.968329 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 20:01:01.970866 systemd-journald[1073]: Collecting audit messages is disabled. Oct 8 20:01:01.970906 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:01:01.970921 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 20:01:01.970934 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 20:01:01.970951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:01:01.970964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:01:01.970979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 8 20:01:01.970990 kernel: fuse: init (API version 7.39) Oct 8 20:01:01.971002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:01:01.971017 systemd-journald[1073]: Journal started Oct 8 20:01:01.971042 systemd-journald[1073]: Runtime Journal (/run/log/journal/ad48961f70bc40f1bb152d19ea4d2fc7) is 4.9M, max 39.3M, 34.4M free. Oct 8 20:01:01.590140 systemd[1]: Queued start job for default target multi-user.target. Oct 8 20:01:01.618969 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 8 20:01:01.619401 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 20:01:01.974430 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:01:01.975204 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:01:01.975335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:01:01.976288 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:01:01.977039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 20:01:01.977853 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 20:01:01.980267 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 20:01:01.980544 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 20:01:01.992315 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 20:01:01.997535 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 20:01:02.005729 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 20:01:02.007422 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 20:01:02.007462 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:01:02.009201 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 20:01:02.015528 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 20:01:02.019538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 20:01:02.020257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:01:02.027526 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 20:01:02.033273 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 20:01:02.033969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:01:02.036508 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 20:01:02.037467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:01:02.041475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:01:02.047546 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 20:01:02.053551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:01:02.057385 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Oct 8 20:01:02.058536 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 20:01:02.059234 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 20:01:02.059973 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 20:01:02.081621 systemd-journald[1073]: Time spent on flushing to /var/log/journal/ad48961f70bc40f1bb152d19ea4d2fc7 is 91.884ms for 936 entries. Oct 8 20:01:02.081621 systemd-journald[1073]: System Journal (/var/log/journal/ad48961f70bc40f1bb152d19ea4d2fc7) is 8.0M, max 584.8M, 576.8M free. Oct 8 20:01:02.244084 systemd-journald[1073]: Received client request to flush runtime journal. Oct 8 20:01:02.244154 kernel: loop0: detected capacity change from 0 to 142488 Oct 8 20:01:02.244180 kernel: ACPI: bus type drm_connector registered Oct 8 20:01:02.095417 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 20:01:02.097083 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 20:01:02.104651 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 20:01:02.123796 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:01:02.147648 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 20:01:02.158652 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:01:02.159481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:01:02.171768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:01:02.177831 udevadm[1129]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 8 20:01:02.202240 systemd-tmpfiles[1119]: ACLs are not supported, ignoring. Oct 8 20:01:02.202256 systemd-tmpfiles[1119]: ACLs are not supported, ignoring. Oct 8 20:01:02.210131 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:01:02.217521 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 20:01:02.252924 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 20:01:02.251177 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 20:01:02.260527 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 20:01:02.261254 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 20:01:02.284108 kernel: loop1: detected capacity change from 0 to 8 Oct 8 20:01:02.303657 kernel: loop2: detected capacity change from 0 to 205544 Oct 8 20:01:02.317416 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 20:01:02.323597 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:01:02.344733 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Oct 8 20:01:02.344755 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Oct 8 20:01:02.349039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 8 20:01:02.415402 kernel: loop3: detected capacity change from 0 to 140768 Oct 8 20:01:02.491428 kernel: loop4: detected capacity change from 0 to 142488 Oct 8 20:01:02.591378 kernel: loop5: detected capacity change from 0 to 8 Oct 8 20:01:02.591502 kernel: loop6: detected capacity change from 0 to 205544 Oct 8 20:01:02.647392 kernel: loop7: detected capacity change from 0 to 140768 Oct 8 20:01:02.701833 (sd-merge)[1149]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Oct 8 20:01:02.702464 (sd-merge)[1149]: Merged extensions into '/usr'. Oct 8 20:01:02.717814 systemd[1]: Reloading requested from client PID 1118 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 20:01:02.718121 systemd[1]: Reloading... Oct 8 20:01:02.799380 zram_generator::config[1172]: No configuration found. Oct 8 20:01:03.056677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:01:03.116131 systemd[1]: Reloading finished in 397 ms. Oct 8 20:01:03.154160 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 20:01:03.156969 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 20:01:03.168751 systemd[1]: Starting ensure-sysext.service... Oct 8 20:01:03.171097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:01:03.183586 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:01:03.187527 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)... Oct 8 20:01:03.187550 systemd[1]: Reloading... Oct 8 20:01:03.215934 systemd-udevd[1233]: Using default interface naming scheme 'v255'. Oct 8 20:01:03.226166 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 20:01:03.227064 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 20:01:03.231467 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 20:01:03.232061 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Oct 8 20:01:03.232160 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Oct 8 20:01:03.234956 ldconfig[1113]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 20:01:03.240144 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:01:03.240151 systemd-tmpfiles[1232]: Skipping /boot Oct 8 20:01:03.255304 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:01:03.255324 systemd-tmpfiles[1232]: Skipping /boot Oct 8 20:01:03.299458 zram_generator::config[1260]: No configuration found. 
Oct 8 20:01:03.390163 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1274) Oct 8 20:01:03.409620 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1272) Oct 8 20:01:03.425411 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1272) Oct 8 20:01:03.464412 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 8 20:01:03.527798 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 8 20:01:03.557860 kernel: ACPI: button: Power Button [PWRF] Oct 8 20:01:03.557961 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 8 20:01:03.557343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:01:03.577393 kernel: mousedev: PS/2 mouse device common for all mice Oct 8 20:01:03.593735 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Oct 8 20:01:03.595416 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Oct 8 20:01:03.599662 kernel: Console: switching to colour dummy device 80x25 Oct 8 20:01:03.599703 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 8 20:01:03.599719 kernel: [drm] features: -context_init Oct 8 20:01:03.603498 kernel: [drm] number of scanouts: 1 Oct 8 20:01:03.603537 kernel: [drm] number of cap sets: 0 Oct 8 20:01:03.605417 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Oct 8 20:01:03.615379 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 8 20:01:03.616375 kernel: Console: switching to colour frame buffer device 128x48 Oct 8 20:01:03.623382 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 8 20:01:03.642272 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 8 20:01:03.642602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 20:01:03.644599 systemd[1]: Reloading finished in 456 ms. Oct 8 20:01:03.667486 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:01:03.669785 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 20:01:03.676773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:01:03.715468 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 20:01:03.718642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:01:03.724636 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:01:03.740768 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 8 20:01:03.741016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:01:03.745965 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 20:01:03.749138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:01:03.752729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 8 20:01:03.757679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:01:03.761687 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:01:03.762658 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:01:03.774407 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 20:01:03.777205 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 8 20:01:03.783701 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:01:03.801374 lvm[1348]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:01:03.802502 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:01:03.805668 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 20:01:03.810346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:01:03.812323 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:01:03.826608 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 20:01:03.827133 systemd[1]: Finished ensure-sysext.service. Oct 8 20:01:03.836643 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 20:01:03.840331 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 20:01:03.841245 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:01:03.843800 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:01:03.857238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:01:03.857475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:01:03.861816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:01:03.862728 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:01:03.874270 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 20:01:03.891079 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:01:03.892429 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:01:03.900925 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:01:03.911601 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 20:01:03.912253 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:01:03.912347 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:01:03.914540 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 20:01:03.930848 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 20:01:03.941949 augenrules[1387]: No rules Oct 8 20:01:03.937204 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:01:03.947704 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Oct 8 20:01:03.949634 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 20:01:03.952713 lvm[1383]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:01:03.989890 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 20:01:03.998728 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 20:01:04.052333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:01:04.063438 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 20:01:04.066283 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 20:01:04.084114 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 20:01:04.086598 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 20:01:04.097747 systemd-networkd[1361]: lo: Link UP Oct 8 20:01:04.097759 systemd-networkd[1361]: lo: Gained carrier Oct 8 20:01:04.099188 systemd-networkd[1361]: Enumeration completed Oct 8 20:01:04.099500 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:01:04.104907 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:01:04.104918 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:01:04.105650 systemd-networkd[1361]: eth0: Link UP Oct 8 20:01:04.105661 systemd-networkd[1361]: eth0: Gained carrier Oct 8 20:01:04.105675 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:01:04.106787 systemd-resolved[1362]: Positive Trust Anchors: Oct 8 20:01:04.106800 systemd-resolved[1362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:01:04.106841 systemd-resolved[1362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:01:04.112515 systemd-resolved[1362]: Using system hostname 'ci-4081-1-0-b-d257b8cc02.novalocal'. Oct 8 20:01:04.112567 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 20:01:04.116633 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:01:04.117230 systemd[1]: Reached target network.target - Network. Oct 8 20:01:04.117702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:01:04.118146 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:01:04.118652 systemd-networkd[1361]: eth0: DHCPv4 address 172.24.4.139/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 8 20:01:04.119461 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 8 20:01:04.122745 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection. Oct 8 20:01:04.124025 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 20:01:04.126052 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 20:01:04.129034 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 20:01:04.131994 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 20:01:04.134093 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 20:01:04.134128 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:01:04.136016 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:01:04.140390 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 20:01:04.146932 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 20:01:04.155592 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 20:01:04.161778 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 20:01:04.162577 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:01:04.163134 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:01:04.165861 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:01:04.165901 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:01:04.171465 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 20:01:04.175471 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 8 20:01:04.185540 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 20:01:04.197551 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 20:01:04.206419 jq[1418]: false Oct 8 20:01:04.210652 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 20:01:04.212466 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 20:01:04.217521 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 20:01:04.229927 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 20:01:04.239638 extend-filesystems[1419]: Found loop4 Oct 8 20:01:04.239602 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Oct 8 20:01:04.247900 extend-filesystems[1419]: Found loop5 Oct 8 20:01:04.247900 extend-filesystems[1419]: Found loop6 Oct 8 20:01:04.247900 extend-filesystems[1419]: Found loop7 Oct 8 20:01:04.247900 extend-filesystems[1419]: Found vda Oct 8 20:01:04.247900 extend-filesystems[1419]: Found vda1 Oct 8 20:01:04.247900 extend-filesystems[1419]: Found vda2 Oct 8 20:01:04.247900 extend-filesystems[1419]: Found vda3 Oct 8 20:01:04.247900 extend-filesystems[1419]: Found usr Oct 8 20:01:04.247900 extend-filesystems[1419]: Found vda4 Oct 8 20:01:04.247900 extend-filesystems[1419]: Found vda6 Oct 8 20:01:04.247900 extend-filesystems[1419]: Found vda7 Oct 8 20:01:04.247900 extend-filesystems[1419]: Found vda9 Oct 8 20:01:04.247900 extend-filesystems[1419]: Checking size of /dev/vda9 Oct 8 20:01:04.249805 dbus-daemon[1417]: [system] SELinux support is enabled Oct 8 20:01:04.252141 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 20:01:04.276040 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 20:01:04.279678 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 20:01:04.280453 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 20:01:04.293430 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 20:01:04.309527 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 20:01:04.313888 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 20:01:04.324776 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 20:01:04.324970 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 20:01:04.325254 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 20:01:04.325436 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 20:01:04.329259 jq[1436]: true Oct 8 20:01:04.337601 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 20:01:04.337799 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 20:01:04.351429 extend-filesystems[1419]: Resized partition /dev/vda9 Oct 8 20:01:04.369938 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 20:01:04.370002 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 20:01:04.371776 update_engine[1434]: I20241008 20:01:04.370806 1434 main.cc:92] Flatcar Update Engine starting Oct 8 20:01:04.376113 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 20:01:04.376152 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Oct 8 20:01:04.388182 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Oct 8 20:01:04.420676 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1266) Oct 8 20:01:04.420721 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Oct 8 20:01:04.420741 update_engine[1434]: I20241008 20:01:04.398847 1434 update_check_scheduler.cc:74] Next update check in 6m39s Oct 8 20:01:04.420822 tar[1438]: linux-amd64/helm Oct 8 20:01:04.389985 systemd[1]: Started update-engine.service - Update Engine. Oct 8 20:01:04.404534 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 20:01:04.424626 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 20:01:04.437398 jq[1442]: true Oct 8 20:01:04.448572 systemd-logind[1429]: New seat seat0. Oct 8 20:01:04.454878 systemd-logind[1429]: Watching system buttons on /dev/input/event1 (Power Button) Oct 8 20:01:04.454908 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 8 20:01:04.455176 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 20:01:04.626300 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 20:01:04.640394 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Oct 8 20:01:04.721373 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 20:01:04.723969 bash[1471]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:01:04.724283 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 20:01:04.724283 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 3 Oct 8 20:01:04.724283 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Oct 8 20:01:04.721587 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 20:01:04.739237 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Oct 8 20:01:04.727770 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 20:01:04.769712 systemd[1]: Starting sshkeys.service... Oct 8 20:01:04.793093 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 8 20:01:04.804829 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 20:01:05.017988 containerd[1448]: time="2024-10-08T20:01:05.017820126Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 20:01:05.102640 containerd[1448]: time="2024-10-08T20:01:05.102393133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:01:05.108809 containerd[1448]: time="2024-10-08T20:01:05.108744044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:01:05.108809 containerd[1448]: time="2024-10-08T20:01:05.108804688Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 20:01:05.108933 containerd[1448]: time="2024-10-08T20:01:05.108829595Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Oct 8 20:01:05.109090 containerd[1448]: time="2024-10-08T20:01:05.109041282Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 20:01:05.109090 containerd[1448]: time="2024-10-08T20:01:05.109070767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 20:01:05.110308 containerd[1448]: time="2024-10-08T20:01:05.109143013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:01:05.110308 containerd[1448]: time="2024-10-08T20:01:05.109167819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:01:05.110308 containerd[1448]: time="2024-10-08T20:01:05.109431934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:01:05.110308 containerd[1448]: time="2024-10-08T20:01:05.109456891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 20:01:05.110308 containerd[1448]: time="2024-10-08T20:01:05.109475626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:01:05.110308 containerd[1448]: time="2024-10-08T20:01:05.109487839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 20:01:05.110308 containerd[1448]: time="2024-10-08T20:01:05.109576496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:01:05.110308 containerd[1448]: time="2024-10-08T20:01:05.109806968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:01:05.111503 containerd[1448]: time="2024-10-08T20:01:05.111472562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:01:05.111503 containerd[1448]: time="2024-10-08T20:01:05.111501005Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 20:01:05.111641 containerd[1448]: time="2024-10-08T20:01:05.111614337Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 20:01:05.111703 containerd[1448]: time="2024-10-08T20:01:05.111679239Z" level=info msg="metadata content store policy set" policy=shared Oct 8 20:01:05.120638 containerd[1448]: time="2024-10-08T20:01:05.120600281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 20:01:05.120742 containerd[1448]: time="2024-10-08T20:01:05.120662367Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 20:01:05.120742 containerd[1448]: time="2024-10-08T20:01:05.120697102Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Oct 8 20:01:05.120812 containerd[1448]: time="2024-10-08T20:01:05.120755472Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 20:01:05.120812 containerd[1448]: time="2024-10-08T20:01:05.120788634Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 20:01:05.120961 containerd[1448]: time="2024-10-08T20:01:05.120932103Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 20:01:05.122673 containerd[1448]: time="2024-10-08T20:01:05.122632983Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 20:01:05.122793 containerd[1448]: time="2024-10-08T20:01:05.122767595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 20:01:05.122825 containerd[1448]: time="2024-10-08T20:01:05.122793444Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 20:01:05.122825 containerd[1448]: time="2024-10-08T20:01:05.122808723Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 20:01:05.122881 containerd[1448]: time="2024-10-08T20:01:05.122825574Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 20:01:05.122881 containerd[1448]: time="2024-10-08T20:01:05.122841324Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 20:01:05.122881 containerd[1448]: time="2024-10-08T20:01:05.122855520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 20:01:05.122881 containerd[1448]: time="2024-10-08T20:01:05.122870358Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 20:01:05.122969 containerd[1448]: time="2024-10-08T20:01:05.122888412Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:01:05.122969 containerd[1448]: time="2024-10-08T20:01:05.122904592Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:01:05.122969 containerd[1448]: time="2024-10-08T20:01:05.122918058Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 20:01:05.122969 containerd[1448]: time="2024-10-08T20:01:05.122932254Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 20:01:05.122969 containerd[1448]: time="2024-10-08T20:01:05.122955157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123076 containerd[1448]: time="2024-10-08T20:01:05.122970646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123076 containerd[1448]: time="2024-10-08T20:01:05.122985875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123076 containerd[1448]: time="2024-10-08T20:01:05.123000733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Oct 8 20:01:05.123076 containerd[1448]: time="2024-10-08T20:01:05.123016041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123076 containerd[1448]: time="2024-10-08T20:01:05.123031120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123076 containerd[1448]: time="2024-10-08T20:01:05.123044294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123076 containerd[1448]: time="2024-10-08T20:01:05.123058251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123076 containerd[1448]: time="2024-10-08T20:01:05.123072147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123252 containerd[1448]: time="2024-10-08T20:01:05.123094098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123252 containerd[1448]: time="2024-10-08T20:01:05.123109286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123252 containerd[1448]: time="2024-10-08T20:01:05.123124004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123252 containerd[1448]: time="2024-10-08T20:01:05.123140004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123252 containerd[1448]: time="2024-10-08T20:01:05.123162446Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:01:05.123252 containerd[1448]: time="2024-10-08T20:01:05.123185018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123252 containerd[1448]: time="2024-10-08T20:01:05.123198293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.123252 containerd[1448]: time="2024-10-08T20:01:05.123210326Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:01:05.124071 containerd[1448]: time="2024-10-08T20:01:05.123273073Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 20:01:05.124071 containerd[1448]: time="2024-10-08T20:01:05.123295035Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:01:05.124071 containerd[1448]: time="2024-10-08T20:01:05.123308300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:01:05.124071 containerd[1448]: time="2024-10-08T20:01:05.123399430Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:01:05.124071 containerd[1448]: time="2024-10-08T20:01:05.123415220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.124071 containerd[1448]: time="2024-10-08T20:01:05.123429918Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Oct 8 20:01:05.124071 containerd[1448]: time="2024-10-08T20:01:05.123441249Z" level=info msg="NRI interface is disabled by configuration." Oct 8 20:01:05.124071 containerd[1448]: time="2024-10-08T20:01:05.123453061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 20:01:05.124254 containerd[1448]: time="2024-10-08T20:01:05.123775616Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:01:05.124254 containerd[1448]: time="2024-10-08T20:01:05.123854985Z" level=info msg="Connect containerd service" Oct 8 20:01:05.124254 containerd[1448]: time="2024-10-08T20:01:05.123887526Z" level=info msg="using legacy CRI server" Oct 8 20:01:05.124254 containerd[1448]: time="2024-10-08T20:01:05.123895681Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:01:05.124254 containerd[1448]: time="2024-10-08T20:01:05.123993314Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:01:05.127252 containerd[1448]: 
time="2024-10-08T20:01:05.126699981Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:01:05.129613 containerd[1448]: time="2024-10-08T20:01:05.126784599Z" level=info msg="Start subscribing containerd event" Oct 8 20:01:05.129613 containerd[1448]: time="2024-10-08T20:01:05.129421735Z" level=info msg="Start recovering state" Oct 8 20:01:05.129613 containerd[1448]: time="2024-10-08T20:01:05.127411385Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:01:05.129613 containerd[1448]: time="2024-10-08T20:01:05.129517825Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:01:05.130696 containerd[1448]: time="2024-10-08T20:01:05.130489568Z" level=info msg="Start event monitor" Oct 8 20:01:05.130696 containerd[1448]: time="2024-10-08T20:01:05.130524944Z" level=info msg="Start snapshots syncer" Oct 8 20:01:05.130696 containerd[1448]: time="2024-10-08T20:01:05.130537528Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:01:05.130696 containerd[1448]: time="2024-10-08T20:01:05.130547116Z" level=info msg="Start streaming server" Oct 8 20:01:05.138160 containerd[1448]: time="2024-10-08T20:01:05.134397567Z" level=info msg="containerd successfully booted in 0.121840s" Oct 8 20:01:05.134521 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:01:05.335591 tar[1438]: linux-amd64/LICENSE Oct 8 20:01:05.335873 tar[1438]: linux-amd64/README.md Oct 8 20:01:05.348426 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 20:01:05.962200 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:01:05.988181 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:01:05.992071 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:01:06.001074 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:01:06.008601 systemd[1]: Started sshd@0-172.24.4.139:22-172.24.4.1:60602.service - OpenSSH per-connection server daemon (172.24.4.1:60602). Oct 8 20:01:06.015779 systemd-networkd[1361]: eth0: Gained IPv6LL Oct 8 20:01:06.016854 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection. Oct 8 20:01:06.018463 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 20:01:06.018640 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 20:01:06.023789 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 20:01:06.034882 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 20:01:06.053877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:01:06.060666 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 20:01:06.075641 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 20:01:06.108213 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 20:01:06.116634 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 20:01:06.129976 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 20:01:06.136955 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 20:01:06.138793 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Oct 8 20:01:07.104996 sshd[1508]: Accepted publickey for core from 172.24.4.1 port 60602 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:01:07.109146 sshd[1508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:07.137566 systemd-logind[1429]: New session 1 of user core. Oct 8 20:01:07.138967 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:01:07.155148 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 20:01:07.194088 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 20:01:07.210031 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:01:07.230429 (systemd)[1528]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:01:07.354608 systemd[1528]: Queued start job for default target default.target. Oct 8 20:01:07.362679 systemd[1528]: Created slice app.slice - User Application Slice. Oct 8 20:01:07.362716 systemd[1528]: Reached target paths.target - Paths. Oct 8 20:01:07.362734 systemd[1528]: Reached target timers.target - Timers. Oct 8 20:01:07.367637 systemd[1528]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 20:01:07.389865 systemd[1528]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 20:01:07.390164 systemd[1528]: Reached target sockets.target - Sockets. Oct 8 20:01:07.390254 systemd[1528]: Reached target basic.target - Basic System. Oct 8 20:01:07.390459 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 20:01:07.393473 systemd[1528]: Reached target default.target - Main User Target. Oct 8 20:01:07.393608 systemd[1528]: Startup finished in 153ms. Oct 8 20:01:07.399702 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 20:01:07.820612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:01:07.828694 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:01:07.873956 systemd[1]: Started sshd@1-172.24.4.139:22-172.24.4.1:39952.service - OpenSSH per-connection server daemon (172.24.4.1:39952). Oct 8 20:01:09.053900 kubelet[1544]: E1008 20:01:09.053798 1544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:01:09.055994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:01:09.056163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:01:09.056498 systemd[1]: kubelet.service: Consumed 1.911s CPU time. Oct 8 20:01:09.636013 sshd[1546]: Accepted publickey for core from 172.24.4.1 port 39952 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:01:09.638159 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:09.645881 systemd-logind[1429]: New session 2 of user core. Oct 8 20:01:09.654772 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 20:01:10.325835 sshd[1546]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:10.339614 systemd[1]: sshd@1-172.24.4.139:22-172.24.4.1:39952.service: Deactivated successfully. 
Oct 8 20:01:10.343842 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 20:01:10.345938 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit. Oct 8 20:01:10.358451 systemd[1]: Started sshd@2-172.24.4.139:22-172.24.4.1:39966.service - OpenSSH per-connection server daemon (172.24.4.1:39966). Oct 8 20:01:10.366122 systemd-logind[1429]: Removed session 2. Oct 8 20:01:11.186140 login[1522]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 8 20:01:11.195226 login[1523]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 8 20:01:11.199977 systemd-logind[1429]: New session 3 of user core. Oct 8 20:01:11.213817 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 20:01:11.221865 systemd-logind[1429]: New session 4 of user core. Oct 8 20:01:11.228998 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 20:01:11.269483 coreos-metadata[1414]: Oct 08 20:01:11.268 WARN failed to locate config-drive, using the metadata service API instead Oct 8 20:01:11.298129 coreos-metadata[1414]: Oct 08 20:01:11.298 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Oct 8 20:01:11.454922 coreos-metadata[1414]: Oct 08 20:01:11.454 INFO Fetch successful Oct 8 20:01:11.455138 coreos-metadata[1414]: Oct 08 20:01:11.455 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 8 20:01:11.469974 coreos-metadata[1414]: Oct 08 20:01:11.469 INFO Fetch successful Oct 8 20:01:11.470207 coreos-metadata[1414]: Oct 08 20:01:11.470 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Oct 8 20:01:11.488382 coreos-metadata[1414]: Oct 08 20:01:11.488 INFO Fetch successful Oct 8 20:01:11.488382 coreos-metadata[1414]: Oct 08 20:01:11.488 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Oct 8 20:01:11.503617 coreos-metadata[1414]: Oct 08 20:01:11.503 INFO Fetch successful Oct 8 20:01:11.503844 coreos-metadata[1414]: Oct 08 20:01:11.503 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Oct 8 20:01:11.519682 coreos-metadata[1414]: Oct 08 20:01:11.519 INFO Fetch successful Oct 8 20:01:11.519682 coreos-metadata[1414]: Oct 08 20:01:11.519 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Oct 8 20:01:11.533268 coreos-metadata[1414]: Oct 08 20:01:11.533 INFO Fetch successful Oct 8 20:01:11.583155 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 8 20:01:11.586240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 20:01:11.899308 sshd[1560]: Accepted publickey for core from 172.24.4.1 port 39966 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:01:11.903143 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:11.912728 coreos-metadata[1484]: Oct 08 20:01:11.912 WARN failed to locate config-drive, using the metadata service API instead Oct 8 20:01:11.915809 systemd-logind[1429]: New session 5 of user core. Oct 8 20:01:11.924171 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 8 20:01:11.959455 coreos-metadata[1484]: Oct 08 20:01:11.959 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 8 20:01:11.975628 coreos-metadata[1484]: Oct 08 20:01:11.975 INFO Fetch successful Oct 8 20:01:11.975628 coreos-metadata[1484]: Oct 08 20:01:11.975 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 8 20:01:11.991018 coreos-metadata[1484]: Oct 08 20:01:11.990 INFO Fetch successful Oct 8 20:01:12.001905 unknown[1484]: wrote ssh authorized keys file for user: core Oct 8 20:01:12.056783 update-ssh-keys[1599]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:01:12.058483 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 8 20:01:12.064944 systemd[1]: Finished sshkeys.service. Oct 8 20:01:12.067121 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:01:12.068004 systemd[1]: Startup finished in 1.064s (kernel) + 15.394s (initrd) + 11.931s (userspace) = 28.389s. Oct 8 20:01:12.668805 sshd[1560]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:12.676114 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit. Oct 8 20:01:12.677725 systemd[1]: sshd@2-172.24.4.139:22-172.24.4.1:39966.service: Deactivated successfully. Oct 8 20:01:12.681425 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 20:01:12.684141 systemd-logind[1429]: Removed session 5. Oct 8 20:01:19.280599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:01:19.288754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:01:19.752808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:01:19.756761 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:01:19.908673 kubelet[1613]: E1008 20:01:19.908481 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:01:19.914452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:01:19.914782 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:01:22.688893 systemd[1]: Started sshd@3-172.24.4.139:22-172.24.4.1:56558.service - OpenSSH per-connection server daemon (172.24.4.1:56558). Oct 8 20:01:23.897799 sshd[1621]: Accepted publickey for core from 172.24.4.1 port 56558 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:01:23.899948 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:23.909761 systemd-logind[1429]: New session 6 of user core. Oct 8 20:01:23.916503 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 20:01:24.707900 sshd[1621]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:24.725749 systemd[1]: sshd@3-172.24.4.139:22-172.24.4.1:56558.service: Deactivated successfully. Oct 8 20:01:24.729092 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 20:01:24.730835 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit. 
Oct 8 20:01:24.740937 systemd[1]: Started sshd@4-172.24.4.139:22-172.24.4.1:55480.service - OpenSSH per-connection server daemon (172.24.4.1:55480). Oct 8 20:01:24.743821 systemd-logind[1429]: Removed session 6. Oct 8 20:01:26.537478 sshd[1628]: Accepted publickey for core from 172.24.4.1 port 55480 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:01:26.540675 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:26.550815 systemd-logind[1429]: New session 7 of user core. Oct 8 20:01:26.559702 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 20:01:27.445559 sshd[1628]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:27.456102 systemd[1]: sshd@4-172.24.4.139:22-172.24.4.1:55480.service: Deactivated successfully. Oct 8 20:01:27.459515 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:01:27.462646 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:01:27.469914 systemd[1]: Started sshd@5-172.24.4.139:22-172.24.4.1:55496.service - OpenSSH per-connection server daemon (172.24.4.1:55496). Oct 8 20:01:27.471857 systemd-logind[1429]: Removed session 7. Oct 8 20:01:28.705552 sshd[1635]: Accepted publickey for core from 172.24.4.1 port 55496 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:01:28.708252 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:28.718773 systemd-logind[1429]: New session 8 of user core. Oct 8 20:01:28.728633 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 20:01:29.370268 sshd[1635]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:29.380025 systemd[1]: sshd@5-172.24.4.139:22-172.24.4.1:55496.service: Deactivated successfully. Oct 8 20:01:29.382507 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 20:01:29.385062 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. Oct 8 20:01:29.391072 systemd[1]: Started sshd@6-172.24.4.139:22-172.24.4.1:55510.service - OpenSSH per-connection server daemon (172.24.4.1:55510). Oct 8 20:01:29.395235 systemd-logind[1429]: Removed session 8. Oct 8 20:01:30.030828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 20:01:30.040752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:01:30.445665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:01:30.460217 (kubelet)[1652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:01:30.671368 kubelet[1652]: E1008 20:01:30.671241 1652 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:01:30.674503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:01:30.674669 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:01:30.773314 sshd[1642]: Accepted publickey for core from 172.24.4.1 port 55510 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:01:30.775743 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:30.785469 systemd-logind[1429]: New session 9 of user core. Oct 8 20:01:30.797853 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:01:31.408251 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 20:01:31.409093 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:01:31.430921 sudo[1660]: pam_unix(sudo:session): session closed for user root Oct 8 20:01:31.767864 sshd[1642]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:31.780283 systemd[1]: sshd@6-172.24.4.139:22-172.24.4.1:55510.service: Deactivated successfully. Oct 8 20:01:31.783229 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:01:31.787786 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:01:31.794973 systemd[1]: Started sshd@7-172.24.4.139:22-172.24.4.1:55522.service - OpenSSH per-connection server daemon (172.24.4.1:55522). Oct 8 20:01:31.798014 systemd-logind[1429]: Removed session 9. Oct 8 20:01:33.128405 sshd[1665]: Accepted publickey for core from 172.24.4.1 port 55522 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:01:33.130922 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:33.139246 systemd-logind[1429]: New session 10 of user core. Oct 8 20:01:33.150784 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 20:01:33.574221 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 20:01:33.574905 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:01:33.580196 sudo[1669]: pam_unix(sudo:session): session closed for user root Oct 8 20:01:33.586639 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 20:01:33.586950 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:01:33.611994 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 20:01:33.614596 auditctl[1672]: No rules Oct 8 20:01:33.615239 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 20:01:33.615662 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 20:01:33.626318 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:01:33.673703 augenrules[1690]: No rules Oct 8 20:01:33.676482 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:01:33.678783 sudo[1668]: pam_unix(sudo:session): session closed for user root Oct 8 20:01:33.849189 sshd[1665]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:33.860044 systemd[1]: sshd@7-172.24.4.139:22-172.24.4.1:55522.service: Deactivated successfully. Oct 8 20:01:33.863822 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 20:01:33.869751 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. Oct 8 20:01:33.873872 systemd[1]: Started sshd@8-172.24.4.139:22-172.24.4.1:45048.service - OpenSSH per-connection server daemon (172.24.4.1:45048). Oct 8 20:01:33.876486 systemd-logind[1429]: Removed session 10. 
Oct 8 20:01:34.965436 sshd[1698]: Accepted publickey for core from 172.24.4.1 port 45048 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA Oct 8 20:01:34.968120 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:34.979854 systemd-logind[1429]: New session 11 of user core. Oct 8 20:01:34.985742 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 20:01:35.442089 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 20:01:35.443164 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:01:36.810813 systemd-resolved[1362]: Clock change detected. Flushing caches. Oct 8 20:01:36.812941 systemd-timesyncd[1368]: Contacted time server 91.194.60.128:123 (2.flatcar.pool.ntp.org). Oct 8 20:01:36.814136 systemd-timesyncd[1368]: Initial clock synchronization to Tue 2024-10-08 20:01:36.810675 UTC. Oct 8 20:01:36.909867 (dockerd)[1717]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 20:01:36.910149 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 20:01:37.450356 dockerd[1717]: time="2024-10-08T20:01:37.450244784Z" level=info msg="Starting up" Oct 8 20:01:37.701124 dockerd[1717]: time="2024-10-08T20:01:37.700276929Z" level=info msg="Loading containers: start." Oct 8 20:01:37.919038 kernel: Initializing XFRM netlink socket Oct 8 20:01:38.074002 systemd-networkd[1361]: docker0: Link UP Oct 8 20:01:38.094778 dockerd[1717]: time="2024-10-08T20:01:38.094026997Z" level=info msg="Loading containers: done." Oct 8 20:01:38.120429 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1639560581-merged.mount: Deactivated successfully. Oct 8 20:01:38.127121 dockerd[1717]: time="2024-10-08T20:01:38.126992925Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 20:01:38.127448 dockerd[1717]: time="2024-10-08T20:01:38.127237774Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 20:01:38.127634 dockerd[1717]: time="2024-10-08T20:01:38.127561020Z" level=info msg="Daemon has completed initialization" Oct 8 20:01:38.203042 dockerd[1717]: time="2024-10-08T20:01:38.202311354Z" level=info msg="API listen on /run/docker.sock" Oct 8 20:01:38.203415 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 20:01:39.486330 containerd[1448]: time="2024-10-08T20:01:39.485696132Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 8 20:01:40.299940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160227700.mount: Deactivated successfully. Oct 8 20:01:41.552063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 20:01:41.565223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:01:41.696881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:01:41.706093 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:01:41.754166 kubelet[1918]: E1008 20:01:41.754100 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:01:41.756185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:01:41.756349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:01:42.312860 containerd[1448]: time="2024-10-08T20:01:42.312806687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:42.314308 containerd[1448]: time="2024-10-08T20:01:42.314101766Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066629" Oct 8 20:01:42.315412 containerd[1448]: time="2024-10-08T20:01:42.315353153Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:42.318660 containerd[1448]: time="2024-10-08T20:01:42.318617866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:42.320226 containerd[1448]: time="2024-10-08T20:01:42.319793762Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 2.833976311s" Oct 8 20:01:42.320226 containerd[1448]: time="2024-10-08T20:01:42.319839497Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\"" Oct 8 20:01:42.322215 containerd[1448]: time="2024-10-08T20:01:42.322181229Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 8 20:01:45.377964 containerd[1448]: time="2024-10-08T20:01:45.377725081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:45.380856 containerd[1448]: time="2024-10-08T20:01:45.380610152Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690930" Oct 8 20:01:45.382283 containerd[1448]: time="2024-10-08T20:01:45.382148337Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:45.391084 containerd[1448]: time="2024-10-08T20:01:45.391009275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:45.395586 
containerd[1448]: time="2024-10-08T20:01:45.395345278Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 3.073113293s" Oct 8 20:01:45.395586 containerd[1448]: time="2024-10-08T20:01:45.395420849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\"" Oct 8 20:01:45.396945 containerd[1448]: time="2024-10-08T20:01:45.396729243Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 8 20:01:47.431186 containerd[1448]: time="2024-10-08T20:01:47.429985545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:47.431579 containerd[1448]: time="2024-10-08T20:01:47.431545731Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646766" Oct 8 20:01:47.432815 containerd[1448]: time="2024-10-08T20:01:47.432795425Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:47.436990 containerd[1448]: time="2024-10-08T20:01:47.436950578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:47.438290 containerd[1448]: time="2024-10-08T20:01:47.438260344Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 2.04133872s" Oct 8 20:01:47.438354 containerd[1448]: time="2024-10-08T20:01:47.438296552Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\"" Oct 8 20:01:47.439057 containerd[1448]: time="2024-10-08T20:01:47.439011002Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 8 20:01:49.467582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147973762.mount: Deactivated successfully. 
Oct 8 20:01:50.001068 containerd[1448]: time="2024-10-08T20:01:50.000961660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:50.002470 containerd[1448]: time="2024-10-08T20:01:50.002414605Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208889" Oct 8 20:01:50.003789 containerd[1448]: time="2024-10-08T20:01:50.003723450Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:50.006465 containerd[1448]: time="2024-10-08T20:01:50.006422802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:50.007297 containerd[1448]: time="2024-10-08T20:01:50.007165565Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 2.56812106s" Oct 8 20:01:50.007297 containerd[1448]: time="2024-10-08T20:01:50.007197145Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\"" Oct 8 20:01:50.007913 containerd[1448]: time="2024-10-08T20:01:50.007711229Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 20:01:50.453325 update_engine[1434]: I20241008 20:01:50.453112 1434 update_attempter.cc:509] Updating boot flags... Oct 8 20:01:50.530570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1948) Oct 8 20:01:50.583818 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1949) Oct 8 20:01:50.636771 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1949) Oct 8 20:01:50.697674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863138174.mount: Deactivated successfully. Oct 8 20:01:51.801116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 20:01:51.813211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:01:51.987118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:01:51.987121 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:01:52.035585 kubelet[1976]: E1008 20:01:52.035439 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:01:52.039020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:01:52.039317 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:01:54.070673 containerd[1448]: time="2024-10-08T20:01:54.070359094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:54.079861 containerd[1448]: time="2024-10-08T20:01:54.079711053Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Oct 8 20:01:54.133451 containerd[1448]: time="2024-10-08T20:01:54.133301973Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:54.142230 containerd[1448]: time="2024-10-08T20:01:54.142091829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:54.145374 containerd[1448]: time="2024-10-08T20:01:54.145309644Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 4.137233761s" Oct 8 20:01:54.145778 containerd[1448]: time="2024-10-08T20:01:54.145572827Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 20:01:54.148006 containerd[1448]: time="2024-10-08T20:01:54.147547250Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 8 20:01:54.755402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679617103.mount: Deactivated successfully. 
Oct 8 20:01:54.764231 containerd[1448]: time="2024-10-08T20:01:54.764091990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:54.766271 containerd[1448]: time="2024-10-08T20:01:54.766116246Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Oct 8 20:01:54.768131 containerd[1448]: time="2024-10-08T20:01:54.768025808Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:54.775156 containerd[1448]: time="2024-10-08T20:01:54.775091990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:54.778315 containerd[1448]: time="2024-10-08T20:01:54.778236257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 630.624927ms" Oct 8 20:01:54.778315 containerd[1448]: time="2024-10-08T20:01:54.778306950Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 8 20:01:54.779377 containerd[1448]: time="2024-10-08T20:01:54.779324478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 8 20:01:55.592088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459552781.mount: Deactivated successfully. Oct 8 20:01:58.566964 containerd[1448]: time="2024-10-08T20:01:58.566828411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:58.568206 containerd[1448]: time="2024-10-08T20:01:58.568168775Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241748" Oct 8 20:01:58.569222 containerd[1448]: time="2024-10-08T20:01:58.569159252Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:58.572726 containerd[1448]: time="2024-10-08T20:01:58.572676028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:58.575015 containerd[1448]: time="2024-10-08T20:01:58.574068961Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.794494103s" Oct 8 20:01:58.575015 containerd[1448]: time="2024-10-08T20:01:58.574114015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Oct 8 20:02:02.039449 systemd[1]: kubelet.service: Stop job pending for unit, skipping automatic restart. 
Oct 8 20:02:02.039938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:02:02.055280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:02:02.092315 systemd[1]: Reloading requested from client PID 2100 ('systemctl') (unit session-11.scope)... Oct 8 20:02:02.092331 systemd[1]: Reloading... Oct 8 20:02:02.173798 zram_generator::config[2135]: No configuration found. Oct 8 20:02:02.342294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:02:02.426101 systemd[1]: Reloading finished in 333 ms. Oct 8 20:02:02.473675 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 20:02:02.473900 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 20:02:02.474233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:02:02.480014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:02:03.100412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:02:03.120320 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:02:03.403716 kubelet[2204]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:02:03.403716 kubelet[2204]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:02:03.403716 kubelet[2204]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 20:02:03.403716 kubelet[2204]: I1008 20:02:03.403243 2204 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:02:04.066909 kubelet[2204]: I1008 20:02:04.066821 2204 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 20:02:04.066909 kubelet[2204]: I1008 20:02:04.066888 2204 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:02:04.068366 kubelet[2204]: I1008 20:02:04.067480 2204 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 20:02:04.133795 kubelet[2204]: E1008 20:02:04.133647 2204 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:04.134007 kubelet[2204]: I1008 20:02:04.133873 2204 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:02:04.161948 kubelet[2204]: E1008 20:02:04.161699 2204 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 20:02:04.161948 kubelet[2204]: I1008 20:02:04.161822 2204 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 20:02:04.172800 kubelet[2204]: I1008 20:02:04.171433 2204 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 20:02:04.172800 kubelet[2204]: I1008 20:02:04.171631 2204 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 20:02:04.172800 kubelet[2204]: I1008 20:02:04.171983 2204 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:02:04.173078 kubelet[2204]: I1008 20:02:04.172037 2204 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-b-d257b8cc02.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 20:02:04.173078 kubelet[2204]: I1008 20:02:04.172428 2204 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:02:04.173078 kubelet[2204]: I1008 20:02:04.172452 2204 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 20:02:04.173078 kubelet[2204]: I1008 20:02:04.172633 2204 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:02:04.179794 kubelet[2204]: I1008 20:02:04.179706 2204 kubelet.go:408] "Attempting to sync node with API server" Oct 8 20:02:04.179794 kubelet[2204]: I1008 20:02:04.179797 2204 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:02:04.179951 kubelet[2204]: I1008 20:02:04.179890 2204 kubelet.go:314] "Adding apiserver pod source" Oct 8 20:02:04.179951 kubelet[2204]: I1008 20:02:04.179935 2204 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:02:04.191153 kubelet[2204]: W1008 20:02:04.190651 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-b-d257b8cc02.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:04.191153 kubelet[2204]: E1008 20:02:04.190846 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-b-d257b8cc02.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:04.194453 kubelet[2204]: W1008 20:02:04.194289 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:04.194849 kubelet[2204]: E1008 20:02:04.194408 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:04.194849 kubelet[2204]: I1008 20:02:04.195029 2204 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:02:04.199313 kubelet[2204]: I1008 20:02:04.199089 2204 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:02:04.201444 kubelet[2204]: W1008 20:02:04.201313 2204 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 20:02:04.205672 kubelet[2204]: I1008 20:02:04.205623 2204 server.go:1269] "Started kubelet" Oct 8 20:02:04.208803 kubelet[2204]: I1008 20:02:04.208322 2204 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:02:04.210685 kubelet[2204]: I1008 20:02:04.210560 2204 server.go:460] "Adding debug handlers to kubelet server" Oct 8 20:02:04.217163 kubelet[2204]: I1008 20:02:04.216953 2204 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:02:04.218243 kubelet[2204]: I1008 20:02:04.217633 2204 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:02:04.218243 kubelet[2204]: I1008 20:02:04.217991 2204 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:02:04.224205 kubelet[2204]: E1008 20:02:04.217996 2204 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.139:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-b-d257b8cc02.novalocal.17fc92c33ac330a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-b-d257b8cc02.novalocal,UID:ci-4081-1-0-b-d257b8cc02.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-b-d257b8cc02.novalocal,},FirstTimestamp:2024-10-08 20:02:04.205576361 +0000 UTC m=+1.078895328,LastTimestamp:2024-10-08 20:02:04.205576361 +0000 UTC m=+1.078895328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-b-d257b8cc02.novalocal,}" Oct 8 20:02:04.226089 kubelet[2204]: I1008 20:02:04.226048 2204 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 20:02:04.229017 
kubelet[2204]: W1008 20:02:04.228455 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:04.229017 kubelet[2204]: E1008 20:02:04.228510 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:04.229017 kubelet[2204]: E1008 20:02:04.226838 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:04.229017 kubelet[2204]: E1008 20:02:04.228810 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-b-d257b8cc02.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="200ms" Oct 8 20:02:04.229017 kubelet[2204]: I1008 20:02:04.225604 2204 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 20:02:04.229391 kubelet[2204]: I1008 20:02:04.229312 2204 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:02:04.229592 kubelet[2204]: I1008 20:02:04.226363 2204 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 20:02:04.232214 kubelet[2204]: I1008 20:02:04.232137 2204 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:02:04.233201 kubelet[2204]: E1008 20:02:04.233155 2204 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:02:04.234495 kubelet[2204]: I1008 20:02:04.234455 2204 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:02:04.234495 kubelet[2204]: I1008 20:02:04.234480 2204 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:02:04.247374 kubelet[2204]: I1008 20:02:04.247253 2204 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:02:04.249626 kubelet[2204]: I1008 20:02:04.249318 2204 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 20:02:04.249626 kubelet[2204]: I1008 20:02:04.249367 2204 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:02:04.249626 kubelet[2204]: I1008 20:02:04.249385 2204 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 20:02:04.249626 kubelet[2204]: E1008 20:02:04.249430 2204 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:02:04.255616 kubelet[2204]: W1008 20:02:04.255577 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:04.255708 kubelet[2204]: E1008 20:02:04.255626 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:04.264294 kubelet[2204]: I1008 20:02:04.264064 2204 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:02:04.264294 kubelet[2204]: I1008 20:02:04.264082 2204 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:02:04.264294 kubelet[2204]: I1008 20:02:04.264099 2204 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:02:04.271387 kubelet[2204]: I1008 20:02:04.271374 2204 policy_none.go:49] "None policy: Start" Oct 8 20:02:04.272122 kubelet[2204]: I1008 20:02:04.272103 2204 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:02:04.272169 kubelet[2204]: I1008 20:02:04.272129 2204 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:02:04.278444 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 20:02:04.294160 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 20:02:04.297659 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 20:02:04.309666 kubelet[2204]: I1008 20:02:04.309647 2204 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:02:04.310199 kubelet[2204]: I1008 20:02:04.310186 2204 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 20:02:04.310314 kubelet[2204]: I1008 20:02:04.310283 2204 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:02:04.310713 kubelet[2204]: I1008 20:02:04.310664 2204 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:02:04.313081 kubelet[2204]: E1008 20:02:04.313033 2204 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:04.367456 systemd[1]: Created slice kubepods-burstable-pod815bfd9b4ff196f8d042b4393db601d4.slice - libcontainer container kubepods-burstable-pod815bfd9b4ff196f8d042b4393db601d4.slice. Oct 8 20:02:04.382081 systemd[1]: Created slice kubepods-burstable-pod6fc70062a84f5b179403e28fc150f45e.slice - libcontainer container kubepods-burstable-pod6fc70062a84f5b179403e28fc150f45e.slice. 
Oct 8 20:02:04.387684 systemd[1]: Created slice kubepods-burstable-poda046c905bd9ef4576db19637e61387e3.slice - libcontainer container kubepods-burstable-poda046c905bd9ef4576db19637e61387e3.slice. Oct 8 20:02:04.414636 kubelet[2204]: I1008 20:02:04.414096 2204 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.414636 kubelet[2204]: E1008 20:02:04.414890 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.429579 kubelet[2204]: E1008 20:02:04.429505 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-b-d257b8cc02.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="400ms" Oct 8 20:02:04.531663 kubelet[2204]: I1008 20:02:04.531575 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/815bfd9b4ff196f8d042b4393db601d4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"815bfd9b4ff196f8d042b4393db601d4\") " pod="kube-system/kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.531663 kubelet[2204]: I1008 20:02:04.531658 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.531992 kubelet[2204]: I1008 20:02:04.531710 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.531992 kubelet[2204]: I1008 20:02:04.531794 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.531992 kubelet[2204]: I1008 20:02:04.531886 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a046c905bd9ef4576db19637e61387e3-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"a046c905bd9ef4576db19637e61387e3\") " pod="kube-system/kube-scheduler-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.531992 kubelet[2204]: I1008 20:02:04.531931 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/815bfd9b4ff196f8d042b4393db601d4-ca-certs\") pod 
\"kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"815bfd9b4ff196f8d042b4393db601d4\") " pod="kube-system/kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.531992 kubelet[2204]: I1008 20:02:04.531975 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/815bfd9b4ff196f8d042b4393db601d4-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"815bfd9b4ff196f8d042b4393db601d4\") " pod="kube-system/kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.532289 kubelet[2204]: I1008 20:02:04.532017 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.532289 kubelet[2204]: I1008 20:02:04.532062 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.619174 kubelet[2204]: I1008 20:02:04.618493 2204 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.619174 kubelet[2204]: E1008 20:02:04.618999 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:04.680144 containerd[1448]: time="2024-10-08T20:02:04.680042292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal,Uid:815bfd9b4ff196f8d042b4393db601d4,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:04.704912 containerd[1448]: time="2024-10-08T20:02:04.704194680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal,Uid:6fc70062a84f5b179403e28fc150f45e,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:04.708889 containerd[1448]: time="2024-10-08T20:02:04.704200631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-b-d257b8cc02.novalocal,Uid:a046c905bd9ef4576db19637e61387e3,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:04.831321 kubelet[2204]: E1008 20:02:04.831181 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-b-d257b8cc02.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="800ms" Oct 8 20:02:05.024073 kubelet[2204]: I1008 20:02:05.023476 2204 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:05.025136 kubelet[2204]: E1008 20:02:05.024926 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:05.073294 
kubelet[2204]: W1008 20:02:05.073020 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:05.073504 kubelet[2204]: E1008 20:02:05.073306 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:05.089640 kubelet[2204]: W1008 20:02:05.089456 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:05.089640 kubelet[2204]: E1008 20:02:05.089587 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:05.092415 kubelet[2204]: W1008 20:02:05.092345 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:05.092600 kubelet[2204]: E1008 20:02:05.092417 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:05.230086 kubelet[2204]: W1008 20:02:05.229963 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-b-d257b8cc02.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:05.230238 kubelet[2204]: E1008 20:02:05.230107 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-b-d257b8cc02.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:05.632813 kubelet[2204]: E1008 20:02:05.632632 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-b-d257b8cc02.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="1.6s" Oct 8 20:02:05.829128 kubelet[2204]: I1008 20:02:05.828818 2204 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:06.079811 kubelet[2204]: E1008 20:02:05.829672 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:06.261376 kubelet[2204]: E1008 20:02:06.261028 2204 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:06.324614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385497919.mount: Deactivated successfully. Oct 8 20:02:06.336842 containerd[1448]: time="2024-10-08T20:02:06.335902678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:06.339290 containerd[1448]: time="2024-10-08T20:02:06.339211383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:06.341256 containerd[1448]: time="2024-10-08T20:02:06.341167292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Oct 8 20:02:06.343958 containerd[1448]: time="2024-10-08T20:02:06.343816350Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:06.346708 containerd[1448]: time="2024-10-08T20:02:06.346638352Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:06.347299 containerd[1448]: time="2024-10-08T20:02:06.347025849Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:02:06.349481 containerd[1448]: time="2024-10-08T20:02:06.349380666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:02:06.354498 containerd[1448]: time="2024-10-08T20:02:06.354398968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:06.359627 containerd[1448]: time="2024-10-08T20:02:06.359194461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.654801239s" Oct 8 20:02:06.363819 containerd[1448]: time="2024-10-08T20:02:06.363453239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.68324708s" Oct 8 20:02:06.369306 containerd[1448]: 
time="2024-10-08T20:02:06.369120077Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.660682359s" Oct 8 20:02:06.620690 containerd[1448]: time="2024-10-08T20:02:06.619436616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:06.620690 containerd[1448]: time="2024-10-08T20:02:06.619570807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:06.620690 containerd[1448]: time="2024-10-08T20:02:06.619591676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:06.620690 containerd[1448]: time="2024-10-08T20:02:06.619687045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:06.622128 containerd[1448]: time="2024-10-08T20:02:06.621863647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:06.622128 containerd[1448]: time="2024-10-08T20:02:06.621922808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:06.622128 containerd[1448]: time="2024-10-08T20:02:06.621965458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:06.625758 containerd[1448]: time="2024-10-08T20:02:06.622307310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:06.633164 containerd[1448]: time="2024-10-08T20:02:06.633027565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:06.633164 containerd[1448]: time="2024-10-08T20:02:06.633108537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:06.633164 containerd[1448]: time="2024-10-08T20:02:06.633128354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:06.634409 containerd[1448]: time="2024-10-08T20:02:06.634320220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:06.717389 systemd[1]: Started cri-containerd-d6369f3c7d34db00f689c24be047c8f931609e21a70cc41b622bde5a7ab02f5d.scope - libcontainer container d6369f3c7d34db00f689c24be047c8f931609e21a70cc41b622bde5a7ab02f5d. Oct 8 20:02:06.728992 systemd[1]: Started cri-containerd-5805226df827a594fe3f3007d5b76fcdc7e5918d93f2747839a3ca843b21ab9e.scope - libcontainer container 5805226df827a594fe3f3007d5b76fcdc7e5918d93f2747839a3ca843b21ab9e. Oct 8 20:02:06.732731 systemd[1]: Started cri-containerd-9af28f402300b8e573cc37b3a00cc4a744e02a5cd82307118453a61611b1b6c7.scope - libcontainer container 9af28f402300b8e573cc37b3a00cc4a744e02a5cd82307118453a61611b1b6c7. 
Oct 8 20:02:06.824581 containerd[1448]: time="2024-10-08T20:02:06.824450770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal,Uid:815bfd9b4ff196f8d042b4393db601d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9af28f402300b8e573cc37b3a00cc4a744e02a5cd82307118453a61611b1b6c7\"" Oct 8 20:02:06.824710 containerd[1448]: time="2024-10-08T20:02:06.824622201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal,Uid:6fc70062a84f5b179403e28fc150f45e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6369f3c7d34db00f689c24be047c8f931609e21a70cc41b622bde5a7ab02f5d\"" Oct 8 20:02:06.825876 containerd[1448]: time="2024-10-08T20:02:06.825714800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-b-d257b8cc02.novalocal,Uid:a046c905bd9ef4576db19637e61387e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5805226df827a594fe3f3007d5b76fcdc7e5918d93f2747839a3ca843b21ab9e\"" Oct 8 20:02:06.881835 containerd[1448]: time="2024-10-08T20:02:06.881673891Z" level=info msg="CreateContainer within sandbox \"d6369f3c7d34db00f689c24be047c8f931609e21a70cc41b622bde5a7ab02f5d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:02:06.882096 containerd[1448]: time="2024-10-08T20:02:06.882070095Z" level=info msg="CreateContainer within sandbox \"9af28f402300b8e573cc37b3a00cc4a744e02a5cd82307118453a61611b1b6c7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:02:06.882727 containerd[1448]: time="2024-10-08T20:02:06.882642127Z" level=info msg="CreateContainer within sandbox \"5805226df827a594fe3f3007d5b76fcdc7e5918d93f2747839a3ca843b21ab9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:02:07.198515 containerd[1448]: time="2024-10-08T20:02:07.197888973Z" level=info msg="CreateContainer within sandbox \"5805226df827a594fe3f3007d5b76fcdc7e5918d93f2747839a3ca843b21ab9e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c16571670550c6f0e66ed230fba7de0bd906bc1f9d0003a5e8743eeb919ac42f\"" Oct 8 20:02:07.199354 containerd[1448]: time="2024-10-08T20:02:07.199206875Z" level=info msg="StartContainer for \"c16571670550c6f0e66ed230fba7de0bd906bc1f9d0003a5e8743eeb919ac42f\"" Oct 8 20:02:07.205159 containerd[1448]: time="2024-10-08T20:02:07.205091872Z" level=info msg="CreateContainer within sandbox \"d6369f3c7d34db00f689c24be047c8f931609e21a70cc41b622bde5a7ab02f5d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"017d7ebc5824a99032546dfb822bde60b3293cb66b65776fc35ad7cfd469b5e6\"" Oct 8 20:02:07.207405 containerd[1448]: time="2024-10-08T20:02:07.207127179Z" level=info msg="StartContainer for \"017d7ebc5824a99032546dfb822bde60b3293cb66b65776fc35ad7cfd469b5e6\"" Oct 8 20:02:07.211197 containerd[1448]: time="2024-10-08T20:02:07.211140987Z" level=info msg="CreateContainer within sandbox \"9af28f402300b8e573cc37b3a00cc4a744e02a5cd82307118453a61611b1b6c7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1aef8b028f1ec7ab2e64c41062d77098c0aabb66ee9f2d896f088008d1562d9d\"" Oct 8 20:02:07.213142 containerd[1448]: time="2024-10-08T20:02:07.213062561Z" level=info msg="StartContainer for \"1aef8b028f1ec7ab2e64c41062d77098c0aabb66ee9f2d896f088008d1562d9d\"" Oct 8 20:02:07.217596 kubelet[2204]: W1008 20:02:07.217537 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-b-d257b8cc02.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:07.218696 kubelet[2204]: E1008 20:02:07.217907 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-b-d257b8cc02.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:07.234160 kubelet[2204]: E1008 20:02:07.233929 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-b-d257b8cc02.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="3.2s" Oct 8 20:02:07.272903 systemd[1]: Started cri-containerd-c16571670550c6f0e66ed230fba7de0bd906bc1f9d0003a5e8743eeb919ac42f.scope - libcontainer container c16571670550c6f0e66ed230fba7de0bd906bc1f9d0003a5e8743eeb919ac42f. Oct 8 20:02:07.306151 kubelet[2204]: W1008 20:02:07.306110 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Oct 8 20:02:07.306259 kubelet[2204]: E1008 20:02:07.306161 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.139:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:02:07.312882 systemd[1]: Started cri-containerd-017d7ebc5824a99032546dfb822bde60b3293cb66b65776fc35ad7cfd469b5e6.scope - libcontainer container 017d7ebc5824a99032546dfb822bde60b3293cb66b65776fc35ad7cfd469b5e6. Oct 8 20:02:07.313821 systemd[1]: Started cri-containerd-1aef8b028f1ec7ab2e64c41062d77098c0aabb66ee9f2d896f088008d1562d9d.scope - libcontainer container 1aef8b028f1ec7ab2e64c41062d77098c0aabb66ee9f2d896f088008d1562d9d. 
Oct 8 20:02:07.361840 containerd[1448]: time="2024-10-08T20:02:07.361796501Z" level=info msg="StartContainer for \"c16571670550c6f0e66ed230fba7de0bd906bc1f9d0003a5e8743eeb919ac42f\" returns successfully" Oct 8 20:02:07.412717 containerd[1448]: time="2024-10-08T20:02:07.412632775Z" level=info msg="StartContainer for \"017d7ebc5824a99032546dfb822bde60b3293cb66b65776fc35ad7cfd469b5e6\" returns successfully" Oct 8 20:02:07.413319 containerd[1448]: time="2024-10-08T20:02:07.412633646Z" level=info msg="StartContainer for \"1aef8b028f1ec7ab2e64c41062d77098c0aabb66ee9f2d896f088008d1562d9d\" returns successfully" Oct 8 20:02:07.432303 kubelet[2204]: I1008 20:02:07.432148 2204 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:07.432585 kubelet[2204]: E1008 20:02:07.432543 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:09.630927 kubelet[2204]: E1008 20:02:09.629720 2204 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-1-0-b-d257b8cc02.novalocal.17fc92c33ac330a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-b-d257b8cc02.novalocal,UID:ci-4081-1-0-b-d257b8cc02.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-b-d257b8cc02.novalocal,},FirstTimestamp:2024-10-08 20:02:04.205576361 +0000 UTC m=+1.078895328,LastTimestamp:2024-10-08 20:02:04.205576361 +0000 UTC m=+1.078895328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-b-d257b8cc02.novalocal,}" Oct 8 20:02:09.690762 kubelet[2204]: E1008 20:02:09.690560 2204 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-1-0-b-d257b8cc02.novalocal.17fc92c33c67d276 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-b-d257b8cc02.novalocal,UID:ci-4081-1-0-b-d257b8cc02.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-b-d257b8cc02.novalocal,},FirstTimestamp:2024-10-08 20:02:04.233142902 +0000 UTC m=+1.106461819,LastTimestamp:2024-10-08 20:02:04.233142902 +0000 UTC m=+1.106461819,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-b-d257b8cc02.novalocal,}" Oct 8 20:02:09.744502 kubelet[2204]: E1008 20:02:09.743858 2204 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-1-0-b-d257b8cc02.novalocal.17fc92c33e363216 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-b-d257b8cc02.novalocal,UID:ci-4081-1-0-b-d257b8cc02.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-1-0-b-d257b8cc02.novalocal status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-b-d257b8cc02.novalocal,},FirstTimestamp:2024-10-08 20:02:04.263445014 +0000 UTC m=+1.136763931,LastTimestamp:2024-10-08 20:02:04.263445014 +0000 UTC m=+1.136763931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-b-d257b8cc02.novalocal,}" Oct 8 20:02:09.946452 kubelet[2204]: E1008 20:02:09.946291 2204 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-1-0-b-d257b8cc02.novalocal" not found Oct 8 20:02:10.325629 kubelet[2204]: E1008 20:02:10.325556 2204 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-1-0-b-d257b8cc02.novalocal" not found Oct 8 20:02:10.448006 kubelet[2204]: E1008 20:02:10.447789 2204 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:10.637838 kubelet[2204]: I1008 20:02:10.637344 2204 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:10.661680 kubelet[2204]: I1008 20:02:10.661142 2204 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:10.661680 kubelet[2204]: E1008 20:02:10.661242 2204 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-1-0-b-d257b8cc02.novalocal\": node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:10.687187 kubelet[2204]: E1008 20:02:10.687120 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:10.787533 kubelet[2204]: E1008 20:02:10.787448 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:10.888777 kubelet[2204]: E1008 20:02:10.888524 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:10.989730 kubelet[2204]: E1008 20:02:10.989640 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.090898 kubelet[2204]: E1008 20:02:11.090837 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.191646 kubelet[2204]: E1008 20:02:11.191436 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.291692 kubelet[2204]: E1008 20:02:11.291590 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.392241 kubelet[2204]: E1008 20:02:11.392175 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.493300 kubelet[2204]: E1008 20:02:11.493118 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.594324 
kubelet[2204]: E1008 20:02:11.594240 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.695136 kubelet[2204]: E1008 20:02:11.695045 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.796394 kubelet[2204]: E1008 20:02:11.796200 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.896686 kubelet[2204]: E1008 20:02:11.896594 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:11.997862 kubelet[2204]: E1008 20:02:11.997734 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:12.098649 kubelet[2204]: E1008 20:02:12.098572 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:12.188258 systemd[1]: Reloading requested from client PID 2482 ('systemctl') (unit session-11.scope)... Oct 8 20:02:12.188296 systemd[1]: Reloading... Oct 8 20:02:12.199849 kubelet[2204]: E1008 20:02:12.199787 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:12.295814 zram_generator::config[2521]: No configuration found. Oct 8 20:02:12.300184 kubelet[2204]: E1008 20:02:12.300149 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:12.400685 kubelet[2204]: E1008 20:02:12.400598 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:12.438494 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:02:12.501827 kubelet[2204]: E1008 20:02:12.501730 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:12.540325 systemd[1]: Reloading finished in 351 ms. Oct 8 20:02:12.580888 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:02:12.601376 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:02:12.601567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:02:12.601612 systemd[1]: kubelet.service: Consumed 1.199s CPU time, 113.4M memory peak, 0B memory swap peak. Oct 8 20:02:12.609719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:02:12.817925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:02:12.830539 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:02:13.353732 kubelet[2584]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 20:02:13.356337 kubelet[2584]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:02:13.356337 kubelet[2584]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:02:13.356949 kubelet[2584]: I1008 20:02:13.356370 2584 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:02:13.368907 kubelet[2584]: I1008 20:02:13.367224 2584 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 20:02:13.369918 kubelet[2584]: I1008 20:02:13.369405 2584 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:02:13.369918 kubelet[2584]: I1008 20:02:13.369673 2584 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 20:02:13.372572 kubelet[2584]: I1008 20:02:13.372528 2584 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 20:02:13.391407 kubelet[2584]: I1008 20:02:13.391175 2584 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:02:13.396909 kubelet[2584]: E1008 20:02:13.396858 2584 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 20:02:13.396909 kubelet[2584]: I1008 20:02:13.396892 2584 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 20:02:13.404766 kubelet[2584]: I1008 20:02:13.404470 2584 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 20:02:13.404766 kubelet[2584]: I1008 20:02:13.404591 2584 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 20:02:13.404766 kubelet[2584]: I1008 20:02:13.404683 2584 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:02:13.405169 kubelet[2584]: I1008 20:02:13.404713 2584 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-b-d257b8cc02.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 20:02:13.405316 kubelet[2584]: I1008 20:02:13.405303 2584 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:02:13.405376 kubelet[2584]: I1008 20:02:13.405369 2584 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 20:02:13.406792 kubelet[2584]: I1008 20:02:13.406656 2584 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:02:13.406892 kubelet[2584]: I1008 20:02:13.406879 2584 kubelet.go:408] "Attempting to sync node with API server" Oct 8 20:02:13.407058 kubelet[2584]: I1008 20:02:13.407047 2584 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:02:13.407803 kubelet[2584]: I1008 20:02:13.407126 2584 kubelet.go:314] "Adding apiserver pod source" Oct 8 20:02:13.407803 kubelet[2584]: I1008 20:02:13.407139 2584 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:02:13.415985 kubelet[2584]: I1008 20:02:13.415968 2584 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:02:13.416722 kubelet[2584]: I1008 20:02:13.416710 2584 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:02:13.417914 kubelet[2584]: I1008 20:02:13.417815 2584 server.go:1269] "Started kubelet" Oct 8 20:02:13.421673 kubelet[2584]: I1008 20:02:13.421658 2584 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:02:13.430645 kubelet[2584]: 
I1008 20:02:13.430059 2584 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:02:13.435027 kubelet[2584]: I1008 20:02:13.434983 2584 server.go:460] "Adding debug handlers to kubelet server" Oct 8 20:02:13.439875 kubelet[2584]: I1008 20:02:13.439816 2584 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:02:13.440138 kubelet[2584]: I1008 20:02:13.440053 2584 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:02:13.440365 kubelet[2584]: I1008 20:02:13.440290 2584 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 20:02:13.446182 kubelet[2584]: I1008 20:02:13.446002 2584 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 20:02:13.448652 kubelet[2584]: E1008 20:02:13.447909 2584 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-b-d257b8cc02.novalocal\" not found" Oct 8 20:02:13.459226 kubelet[2584]: I1008 20:02:13.458673 2584 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 20:02:13.459226 kubelet[2584]: I1008 20:02:13.458861 2584 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:02:13.473239 kubelet[2584]: I1008 20:02:13.472616 2584 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:02:13.473453 kubelet[2584]: I1008 20:02:13.473416 2584 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:02:13.473650 kubelet[2584]: I1008 20:02:13.473631 2584 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:02:13.474474 kubelet[2584]: I1008 20:02:13.474436 2584 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:02:13.476681 kubelet[2584]: I1008 20:02:13.476656 2584 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:02:13.476760 kubelet[2584]: I1008 20:02:13.476685 2584 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:02:13.476760 kubelet[2584]: I1008 20:02:13.476703 2584 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 20:02:13.476815 kubelet[2584]: E1008 20:02:13.476776 2584 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:02:13.486987 kubelet[2584]: E1008 20:02:13.486936 2584 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:02:13.535005 kubelet[2584]: I1008 20:02:13.534979 2584 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:02:13.535648 kubelet[2584]: I1008 20:02:13.535197 2584 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:02:13.535648 kubelet[2584]: I1008 20:02:13.535232 2584 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:02:13.535648 kubelet[2584]: I1008 20:02:13.535420 2584 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:02:13.535648 kubelet[2584]: I1008 20:02:13.535432 2584 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:02:13.535648 kubelet[2584]: I1008 20:02:13.535450 2584 policy_none.go:49] "None policy: Start" Oct 8 20:02:13.537020 kubelet[2584]: I1008 20:02:13.536995 2584 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:02:13.537074 kubelet[2584]: I1008 20:02:13.537023 2584 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:02:13.537241 kubelet[2584]: I1008 20:02:13.537200 2584 state_mem.go:75] "Updated machine memory state" Oct 8 20:02:13.548203 kubelet[2584]: I1008 20:02:13.547463 2584 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:02:13.548203 kubelet[2584]: I1008 20:02:13.547615 2584 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 20:02:13.548203 kubelet[2584]: I1008 20:02:13.547625 2584 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:02:13.548203 kubelet[2584]: I1008 20:02:13.547838 2584 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:02:13.591507 kubelet[2584]: W1008 20:02:13.591345 2584 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 8 20:02:13.591988 kubelet[2584]: W1008 20:02:13.591348 2584 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 8 20:02:13.593926 kubelet[2584]: W1008 20:02:13.591472 2584 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 8 20:02:13.655285 kubelet[2584]: I1008 20:02:13.654654 2584 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.659473 kubelet[2584]: I1008 20:02:13.659316 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/815bfd9b4ff196f8d042b4393db601d4-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"815bfd9b4ff196f8d042b4393db601d4\") " pod="kube-system/kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.659473 kubelet[2584]: I1008 20:02:13.659356 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/815bfd9b4ff196f8d042b4393db601d4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"815bfd9b4ff196f8d042b4393db601d4\") " pod="kube-system/kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.659473 kubelet[2584]: I1008 
20:02:13.659408 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.659473 kubelet[2584]: I1008 20:02:13.659435 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.660959 kubelet[2584]: I1008 20:02:13.659700 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.661155 kubelet[2584]: I1008 20:02:13.661061 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.661289 kubelet[2584]: I1008 20:02:13.661270 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fc70062a84f5b179403e28fc150f45e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"6fc70062a84f5b179403e28fc150f45e\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.661538 kubelet[2584]: I1008 20:02:13.661519 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a046c905bd9ef4576db19637e61387e3-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"a046c905bd9ef4576db19637e61387e3\") " pod="kube-system/kube-scheduler-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.662781 kubelet[2584]: I1008 20:02:13.662760 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/815bfd9b4ff196f8d042b4393db601d4-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal\" (UID: \"815bfd9b4ff196f8d042b4393db601d4\") " pod="kube-system/kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.667252 kubelet[2584]: I1008 20:02:13.667231 2584 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:13.667423 kubelet[2584]: I1008 20:02:13.667410 2584 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:14.413437 kubelet[2584]: I1008 20:02:14.413319 2584 apiserver.go:52] "Watching apiserver" Oct 8 20:02:14.459404 
kubelet[2584]: I1008 20:02:14.459319 2584 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 20:02:14.572775 kubelet[2584]: I1008 20:02:14.571953 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-1-0-b-d257b8cc02.novalocal" podStartSLOduration=1.571938109 podStartE2EDuration="1.571938109s" podCreationTimestamp="2024-10-08 20:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:14.571183565 +0000 UTC m=+1.325821109" watchObservedRunningTime="2024-10-08 20:02:14.571938109 +0000 UTC m=+1.326575642" Oct 8 20:02:14.629577 kubelet[2584]: I1008 20:02:14.629252 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-1-0-b-d257b8cc02.novalocal" podStartSLOduration=1.6292334259999999 podStartE2EDuration="1.629233426s" podCreationTimestamp="2024-10-08 20:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:14.609761909 +0000 UTC m=+1.364399452" watchObservedRunningTime="2024-10-08 20:02:14.629233426 +0000 UTC m=+1.383870959" Oct 8 20:02:14.651179 kubelet[2584]: I1008 20:02:14.650954 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-1-0-b-d257b8cc02.novalocal" podStartSLOduration=1.650940077 podStartE2EDuration="1.650940077s" podCreationTimestamp="2024-10-08 20:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:14.6294638 +0000 UTC m=+1.384101333" watchObservedRunningTime="2024-10-08 20:02:14.650940077 +0000 UTC m=+1.405577610" Oct 8 20:02:18.111723 kubelet[2584]: I1008 20:02:18.111660 2584 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:02:18.118698 containerd[1448]: time="2024-10-08T20:02:18.118622505Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 20:02:18.119350 kubelet[2584]: I1008 20:02:18.119011 2584 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:02:18.863294 systemd[1]: Created slice kubepods-besteffort-pod708357e9_d029_46b3_86c7_875c0ff602ee.slice - libcontainer container kubepods-besteffort-pod708357e9_d029_46b3_86c7_875c0ff602ee.slice. 
Oct 8 20:02:18.898020 kubelet[2584]: I1008 20:02:18.897868 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/708357e9-d029-46b3-86c7-875c0ff602ee-kube-proxy\") pod \"kube-proxy-2bt4l\" (UID: \"708357e9-d029-46b3-86c7-875c0ff602ee\") " pod="kube-system/kube-proxy-2bt4l" Oct 8 20:02:18.898020 kubelet[2584]: I1008 20:02:18.897908 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/708357e9-d029-46b3-86c7-875c0ff602ee-xtables-lock\") pod \"kube-proxy-2bt4l\" (UID: \"708357e9-d029-46b3-86c7-875c0ff602ee\") " pod="kube-system/kube-proxy-2bt4l" Oct 8 20:02:18.898020 kubelet[2584]: I1008 20:02:18.897929 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/708357e9-d029-46b3-86c7-875c0ff602ee-lib-modules\") pod \"kube-proxy-2bt4l\" (UID: \"708357e9-d029-46b3-86c7-875c0ff602ee\") " pod="kube-system/kube-proxy-2bt4l" Oct 8 20:02:18.898020 kubelet[2584]: I1008 20:02:18.897949 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkvff\" (UniqueName: \"kubernetes.io/projected/708357e9-d029-46b3-86c7-875c0ff602ee-kube-api-access-tkvff\") pod \"kube-proxy-2bt4l\" (UID: \"708357e9-d029-46b3-86c7-875c0ff602ee\") " pod="kube-system/kube-proxy-2bt4l" Oct 8 20:02:19.008541 kubelet[2584]: E1008 20:02:19.008033 2584 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 20:02:19.008541 kubelet[2584]: E1008 20:02:19.008061 2584 projected.go:194] Error preparing data for projected volume kube-api-access-tkvff for pod kube-system/kube-proxy-2bt4l: configmap "kube-root-ca.crt" not found Oct 8 20:02:19.008541 kubelet[2584]: E1008 20:02:19.008114 2584 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/708357e9-d029-46b3-86c7-875c0ff602ee-kube-api-access-tkvff podName:708357e9-d029-46b3-86c7-875c0ff602ee nodeName:}" failed. No retries permitted until 2024-10-08 20:02:19.508094857 +0000 UTC m=+6.262732390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tkvff" (UniqueName: "kubernetes.io/projected/708357e9-d029-46b3-86c7-875c0ff602ee-kube-api-access-tkvff") pod "kube-proxy-2bt4l" (UID: "708357e9-d029-46b3-86c7-875c0ff602ee") : configmap "kube-root-ca.crt" not found Oct 8 20:02:19.253764 systemd[1]: Created slice kubepods-besteffort-podca9908fd_8bf1_433a_99c2_d351915da22f.slice - libcontainer container kubepods-besteffort-podca9908fd_8bf1_433a_99c2_d351915da22f.slice. 
Oct 8 20:02:19.300645 kubelet[2584]: I1008 20:02:19.300603 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvcl\" (UniqueName: \"kubernetes.io/projected/ca9908fd-8bf1-433a-99c2-d351915da22f-kube-api-access-tqvcl\") pod \"tigera-operator-55748b469f-szr8f\" (UID: \"ca9908fd-8bf1-433a-99c2-d351915da22f\") " pod="tigera-operator/tigera-operator-55748b469f-szr8f" Oct 8 20:02:19.301167 kubelet[2584]: I1008 20:02:19.301087 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ca9908fd-8bf1-433a-99c2-d351915da22f-var-lib-calico\") pod \"tigera-operator-55748b469f-szr8f\" (UID: \"ca9908fd-8bf1-433a-99c2-d351915da22f\") " pod="tigera-operator/tigera-operator-55748b469f-szr8f" Oct 8 20:02:19.505598 sudo[1701]: pam_unix(sudo:session): session closed for user root Oct 8 20:02:19.560678 containerd[1448]: time="2024-10-08T20:02:19.559830137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-szr8f,Uid:ca9908fd-8bf1-433a-99c2-d351915da22f,Namespace:tigera-operator,Attempt:0,}" Oct 8 20:02:19.636448 containerd[1448]: time="2024-10-08T20:02:19.635894872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:19.636448 containerd[1448]: time="2024-10-08T20:02:19.636081684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:19.636448 containerd[1448]: time="2024-10-08T20:02:19.636121339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:19.636448 containerd[1448]: time="2024-10-08T20:02:19.636243980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:19.658539 sshd[1698]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:19.666902 systemd[1]: Started cri-containerd-88060a2d71acfb3101434c961f3553f0ea6118b6a510fe93f1d1382ab12793ac.scope - libcontainer container 88060a2d71acfb3101434c961f3553f0ea6118b6a510fe93f1d1382ab12793ac. Oct 8 20:02:19.667354 systemd[1]: sshd@8-172.24.4.139:22-172.24.4.1:45048.service: Deactivated successfully. Oct 8 20:02:19.669156 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:02:19.669686 systemd[1]: session-11.scope: Consumed 6.616s CPU time, 153.1M memory peak, 0B memory swap peak. Oct 8 20:02:19.671963 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:02:19.674675 systemd-logind[1429]: Removed session 11. 
Oct 8 20:02:19.708783 containerd[1448]: time="2024-10-08T20:02:19.708268597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-szr8f,Uid:ca9908fd-8bf1-433a-99c2-d351915da22f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"88060a2d71acfb3101434c961f3553f0ea6118b6a510fe93f1d1382ab12793ac\"" Oct 8 20:02:19.711301 containerd[1448]: time="2024-10-08T20:02:19.711271272Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 20:02:19.771819 containerd[1448]: time="2024-10-08T20:02:19.771456082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bt4l,Uid:708357e9-d029-46b3-86c7-875c0ff602ee,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:19.807562 containerd[1448]: time="2024-10-08T20:02:19.807355979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:19.807871 containerd[1448]: time="2024-10-08T20:02:19.807622581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:19.807871 containerd[1448]: time="2024-10-08T20:02:19.807661384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:19.808198 containerd[1448]: time="2024-10-08T20:02:19.808094751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:19.838097 systemd[1]: Started cri-containerd-7e24be5cf35b4fed098cbd0403d4c244449dc754681dc68d5397180b6c364e6a.scope - libcontainer container 7e24be5cf35b4fed098cbd0403d4c244449dc754681dc68d5397180b6c364e6a. Oct 8 20:02:19.866138 containerd[1448]: time="2024-10-08T20:02:19.866087562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bt4l,Uid:708357e9-d029-46b3-86c7-875c0ff602ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e24be5cf35b4fed098cbd0403d4c244449dc754681dc68d5397180b6c364e6a\"" Oct 8 20:02:19.872527 containerd[1448]: time="2024-10-08T20:02:19.872467086Z" level=info msg="CreateContainer within sandbox \"7e24be5cf35b4fed098cbd0403d4c244449dc754681dc68d5397180b6c364e6a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:02:19.905416 containerd[1448]: time="2024-10-08T20:02:19.904997158Z" level=info msg="CreateContainer within sandbox \"7e24be5cf35b4fed098cbd0403d4c244449dc754681dc68d5397180b6c364e6a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7db08a291eba87c514d139955cd7c421e2e03759f80b096d6ca63b028b8af0e6\"" Oct 8 20:02:19.908022 containerd[1448]: time="2024-10-08T20:02:19.905835837Z" level=info msg="StartContainer for \"7db08a291eba87c514d139955cd7c421e2e03759f80b096d6ca63b028b8af0e6\"" Oct 8 20:02:19.948926 systemd[1]: Started cri-containerd-7db08a291eba87c514d139955cd7c421e2e03759f80b096d6ca63b028b8af0e6.scope - libcontainer container 7db08a291eba87c514d139955cd7c421e2e03759f80b096d6ca63b028b8af0e6. Oct 8 20:02:19.990112 containerd[1448]: time="2024-10-08T20:02:19.990037556Z" level=info msg="StartContainer for \"7db08a291eba87c514d139955cd7c421e2e03759f80b096d6ca63b028b8af0e6\" returns successfully" Oct 8 20:02:20.466564 systemd[1]: run-containerd-runc-k8s.io-88060a2d71acfb3101434c961f3553f0ea6118b6a510fe93f1d1382ab12793ac-runc.Npu0iF.mount: Deactivated successfully. 
Oct 8 20:02:20.661367 kubelet[2584]: I1008 20:02:20.661124 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2bt4l" podStartSLOduration=2.6610910690000003 podStartE2EDuration="2.661091069s" podCreationTimestamp="2024-10-08 20:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:20.66067204 +0000 UTC m=+7.415309623" watchObservedRunningTime="2024-10-08 20:02:20.661091069 +0000 UTC m=+7.415728652" Oct 8 20:02:21.750110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3724835616.mount: Deactivated successfully. Oct 8 20:02:22.547808 containerd[1448]: time="2024-10-08T20:02:22.547769363Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:22.549526 containerd[1448]: time="2024-10-08T20:02:22.549183123Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136557" Oct 8 20:02:22.550652 containerd[1448]: time="2024-10-08T20:02:22.550617763Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:22.553415 containerd[1448]: time="2024-10-08T20:02:22.553383198Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:22.555713 containerd[1448]: time="2024-10-08T20:02:22.555679518Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.844371107s" Oct 8 20:02:22.555787 containerd[1448]: time="2024-10-08T20:02:22.555713193Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 20:02:22.568344 containerd[1448]: time="2024-10-08T20:02:22.568292365Z" level=info msg="CreateContainer within sandbox \"88060a2d71acfb3101434c961f3553f0ea6118b6a510fe93f1d1382ab12793ac\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 20:02:22.586989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506160618.mount: Deactivated successfully. Oct 8 20:02:22.590003 containerd[1448]: time="2024-10-08T20:02:22.589900407Z" level=info msg="CreateContainer within sandbox \"88060a2d71acfb3101434c961f3553f0ea6118b6a510fe93f1d1382ab12793ac\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e270d0bae2b1b6f8bbfe7c0792f8c64e16589fc67a6eeb67b99f7cc9d8087443\"" Oct 8 20:02:22.590458 containerd[1448]: time="2024-10-08T20:02:22.590432578Z" level=info msg="StartContainer for \"e270d0bae2b1b6f8bbfe7c0792f8c64e16589fc67a6eeb67b99f7cc9d8087443\"" Oct 8 20:02:22.630119 systemd[1]: Started cri-containerd-e270d0bae2b1b6f8bbfe7c0792f8c64e16589fc67a6eeb67b99f7cc9d8087443.scope - libcontainer container e270d0bae2b1b6f8bbfe7c0792f8c64e16589fc67a6eeb67b99f7cc9d8087443. 
Oct 8 20:02:22.664983 containerd[1448]: time="2024-10-08T20:02:22.664951712Z" level=info msg="StartContainer for \"e270d0bae2b1b6f8bbfe7c0792f8c64e16589fc67a6eeb67b99f7cc9d8087443\" returns successfully" Oct 8 20:02:23.561553 kubelet[2584]: I1008 20:02:23.560989 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-szr8f" podStartSLOduration=1.705141292 podStartE2EDuration="4.560963138s" podCreationTimestamp="2024-10-08 20:02:19 +0000 UTC" firstStartedPulling="2024-10-08 20:02:19.710804131 +0000 UTC m=+6.465441665" lastFinishedPulling="2024-10-08 20:02:22.566625968 +0000 UTC m=+9.321263511" observedRunningTime="2024-10-08 20:02:23.56093735 +0000 UTC m=+10.315574923" watchObservedRunningTime="2024-10-08 20:02:23.560963138 +0000 UTC m=+10.315600711" Oct 8 20:02:26.143829 systemd[1]: Created slice kubepods-besteffort-pode176156d_45ec_4d0e_8a31_d6cb05fb31e7.slice - libcontainer container kubepods-besteffort-pode176156d_45ec_4d0e_8a31_d6cb05fb31e7.slice. Oct 8 20:02:26.148790 kubelet[2584]: I1008 20:02:26.148657 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e176156d-45ec-4d0e-8a31-d6cb05fb31e7-typha-certs\") pod \"calico-typha-8567998667-fvf5w\" (UID: \"e176156d-45ec-4d0e-8a31-d6cb05fb31e7\") " pod="calico-system/calico-typha-8567998667-fvf5w" Oct 8 20:02:26.148790 kubelet[2584]: I1008 20:02:26.148698 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vbx\" (UniqueName: \"kubernetes.io/projected/e176156d-45ec-4d0e-8a31-d6cb05fb31e7-kube-api-access-z7vbx\") pod \"calico-typha-8567998667-fvf5w\" (UID: \"e176156d-45ec-4d0e-8a31-d6cb05fb31e7\") " pod="calico-system/calico-typha-8567998667-fvf5w" Oct 8 20:02:26.148790 kubelet[2584]: I1008 20:02:26.148725 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e176156d-45ec-4d0e-8a31-d6cb05fb31e7-tigera-ca-bundle\") pod \"calico-typha-8567998667-fvf5w\" (UID: \"e176156d-45ec-4d0e-8a31-d6cb05fb31e7\") " pod="calico-system/calico-typha-8567998667-fvf5w" Oct 8 20:02:26.249116 kubelet[2584]: I1008 20:02:26.249080 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48ee1546-ca8d-485c-be26-f49ba9831531-tigera-ca-bundle\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249116 kubelet[2584]: I1008 20:02:26.249119 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/48ee1546-ca8d-485c-be26-f49ba9831531-var-run-calico\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249282 kubelet[2584]: I1008 20:02:26.249138 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48ee1546-ca8d-485c-be26-f49ba9831531-var-lib-calico\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249282 kubelet[2584]: I1008 20:02:26.249208 2584 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48ee1546-ca8d-485c-be26-f49ba9831531-lib-modules\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249282 kubelet[2584]: I1008 20:02:26.249229 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/48ee1546-ca8d-485c-be26-f49ba9831531-cni-net-dir\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249282 kubelet[2584]: I1008 20:02:26.249249 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/48ee1546-ca8d-485c-be26-f49ba9831531-cni-bin-dir\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249282 kubelet[2584]: I1008 20:02:26.249269 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/48ee1546-ca8d-485c-be26-f49ba9831531-cni-log-dir\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249425 kubelet[2584]: I1008 20:02:26.249288 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48ee1546-ca8d-485c-be26-f49ba9831531-xtables-lock\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249425 kubelet[2584]: I1008 20:02:26.249308 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/48ee1546-ca8d-485c-be26-f49ba9831531-policysync\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249425 kubelet[2584]: I1008 20:02:26.249350 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/48ee1546-ca8d-485c-be26-f49ba9831531-flexvol-driver-host\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249425 kubelet[2584]: I1008 20:02:26.249382 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/48ee1546-ca8d-485c-be26-f49ba9831531-node-certs\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.249425 kubelet[2584]: I1008 20:02:26.249400 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz42s\" (UniqueName: \"kubernetes.io/projected/48ee1546-ca8d-485c-be26-f49ba9831531-kube-api-access-wz42s\") pod \"calico-node-2t5ql\" (UID: \"48ee1546-ca8d-485c-be26-f49ba9831531\") " pod="calico-system/calico-node-2t5ql" Oct 8 20:02:26.265916 systemd[1]: Created slice kubepods-besteffort-pod48ee1546_ca8d_485c_be26_f49ba9831531.slice - libcontainer container 
kubepods-besteffort-pod48ee1546_ca8d_485c_be26_f49ba9831531.slice. Oct 8 20:02:26.354976 kubelet[2584]: E1008 20:02:26.354422 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.354976 kubelet[2584]: W1008 20:02:26.354451 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.354976 kubelet[2584]: E1008 20:02:26.354479 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.360228 kubelet[2584]: E1008 20:02:26.360202 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.360228 kubelet[2584]: W1008 20:02:26.360221 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.360331 kubelet[2584]: E1008 20:02:26.360280 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.378679 kubelet[2584]: E1008 20:02:26.377923 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.378679 kubelet[2584]: W1008 20:02:26.377967 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.378679 kubelet[2584]: E1008 20:02:26.377986 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.382441 kubelet[2584]: E1008 20:02:26.382382 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:26.446338 kubelet[2584]: E1008 20:02:26.445706 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.446338 kubelet[2584]: W1008 20:02:26.445758 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.446338 kubelet[2584]: E1008 20:02:26.445808 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.446338 kubelet[2584]: E1008 20:02:26.446202 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.446338 kubelet[2584]: W1008 20:02:26.446213 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.446338 kubelet[2584]: E1008 20:02:26.446225 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.446609 kubelet[2584]: E1008 20:02:26.446452 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.446609 kubelet[2584]: W1008 20:02:26.446462 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.446609 kubelet[2584]: E1008 20:02:26.446493 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.447774 kubelet[2584]: E1008 20:02:26.447723 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.447774 kubelet[2584]: W1008 20:02:26.447756 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.447867 kubelet[2584]: E1008 20:02:26.447776 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.448161 kubelet[2584]: E1008 20:02:26.448010 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.448161 kubelet[2584]: W1008 20:02:26.448025 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.448161 kubelet[2584]: E1008 20:02:26.448034 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.448527 kubelet[2584]: E1008 20:02:26.448496 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.448527 kubelet[2584]: W1008 20:02:26.448511 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.448527 kubelet[2584]: E1008 20:02:26.448523 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.449399 kubelet[2584]: E1008 20:02:26.449370 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.449399 kubelet[2584]: W1008 20:02:26.449387 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.449399 kubelet[2584]: E1008 20:02:26.449398 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.449910 kubelet[2584]: E1008 20:02:26.449886 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.449910 kubelet[2584]: W1008 20:02:26.449901 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.449910 kubelet[2584]: E1008 20:02:26.449911 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.450502 kubelet[2584]: E1008 20:02:26.450477 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.450502 kubelet[2584]: W1008 20:02:26.450493 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.450584 kubelet[2584]: E1008 20:02:26.450505 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.451311 kubelet[2584]: E1008 20:02:26.451289 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.451383 kubelet[2584]: W1008 20:02:26.451329 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.451383 kubelet[2584]: E1008 20:02:26.451342 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.452136 kubelet[2584]: E1008 20:02:26.452098 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.452136 kubelet[2584]: W1008 20:02:26.452117 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.452136 kubelet[2584]: E1008 20:02:26.452128 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.452375 kubelet[2584]: E1008 20:02:26.452353 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.452375 kubelet[2584]: W1008 20:02:26.452370 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.452445 kubelet[2584]: E1008 20:02:26.452380 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.453288 containerd[1448]: time="2024-10-08T20:02:26.453239488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8567998667-fvf5w,Uid:e176156d-45ec-4d0e-8a31-d6cb05fb31e7,Namespace:calico-system,Attempt:0,}" Oct 8 20:02:26.453949 kubelet[2584]: E1008 20:02:26.453600 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.453949 kubelet[2584]: W1008 20:02:26.453615 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.453949 kubelet[2584]: E1008 20:02:26.453630 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.453949 kubelet[2584]: I1008 20:02:26.453653 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b58e5c77-b9e1-40fc-b328-51c2c4af45f8-kubelet-dir\") pod \"csi-node-driver-7lz5k\" (UID: \"b58e5c77-b9e1-40fc-b328-51c2c4af45f8\") " pod="calico-system/csi-node-driver-7lz5k" Oct 8 20:02:26.454322 kubelet[2584]: E1008 20:02:26.454300 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.454322 kubelet[2584]: W1008 20:02:26.454318 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.454611 kubelet[2584]: E1008 20:02:26.454330 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.454611 kubelet[2584]: I1008 20:02:26.454350 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b58e5c77-b9e1-40fc-b328-51c2c4af45f8-varrun\") pod \"csi-node-driver-7lz5k\" (UID: \"b58e5c77-b9e1-40fc-b328-51c2c4af45f8\") " pod="calico-system/csi-node-driver-7lz5k" Oct 8 20:02:26.455134 kubelet[2584]: E1008 20:02:26.454708 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.455134 kubelet[2584]: W1008 20:02:26.455132 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.455380 kubelet[2584]: E1008 20:02:26.455345 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.455380 kubelet[2584]: I1008 20:02:26.455374 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b58e5c77-b9e1-40fc-b328-51c2c4af45f8-socket-dir\") pod \"csi-node-driver-7lz5k\" (UID: \"b58e5c77-b9e1-40fc-b328-51c2c4af45f8\") " pod="calico-system/csi-node-driver-7lz5k" Oct 8 20:02:26.455662 kubelet[2584]: E1008 20:02:26.455621 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.455662 kubelet[2584]: W1008 20:02:26.455640 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.456144 kubelet[2584]: E1008 20:02:26.455676 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.456340 kubelet[2584]: E1008 20:02:26.456298 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.456340 kubelet[2584]: W1008 20:02:26.456335 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.456516 kubelet[2584]: E1008 20:02:26.456480 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.459458 kubelet[2584]: E1008 20:02:26.459432 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.459458 kubelet[2584]: W1008 20:02:26.459451 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.459624 kubelet[2584]: E1008 20:02:26.459549 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.459798 kubelet[2584]: E1008 20:02:26.459779 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.459798 kubelet[2584]: W1008 20:02:26.459793 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.460834 kubelet[2584]: E1008 20:02:26.459882 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.462098 kubelet[2584]: E1008 20:02:26.462066 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.462098 kubelet[2584]: W1008 20:02:26.462092 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.462291 kubelet[2584]: E1008 20:02:26.462216 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.462335 kubelet[2584]: E1008 20:02:26.462318 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.462335 kubelet[2584]: W1008 20:02:26.462327 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.462516 kubelet[2584]: E1008 20:02:26.462442 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.462568 kubelet[2584]: E1008 20:02:26.462541 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.462568 kubelet[2584]: W1008 20:02:26.462549 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.462568 kubelet[2584]: E1008 20:02:26.462561 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.462882 kubelet[2584]: E1008 20:02:26.462861 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.462944 kubelet[2584]: W1008 20:02:26.462876 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.462973 kubelet[2584]: E1008 20:02:26.462963 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.463274 kubelet[2584]: E1008 20:02:26.463235 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.463274 kubelet[2584]: W1008 20:02:26.463252 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.463345 kubelet[2584]: E1008 20:02:26.463276 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.463974 kubelet[2584]: E1008 20:02:26.463829 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.463974 kubelet[2584]: W1008 20:02:26.463846 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.463974 kubelet[2584]: E1008 20:02:26.463856 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.464257 kubelet[2584]: E1008 20:02:26.464231 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.464257 kubelet[2584]: W1008 20:02:26.464247 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.464257 kubelet[2584]: E1008 20:02:26.464257 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.464809 kubelet[2584]: E1008 20:02:26.464644 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.464809 kubelet[2584]: W1008 20:02:26.464654 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.464809 kubelet[2584]: E1008 20:02:26.464667 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.467638 kubelet[2584]: E1008 20:02:26.465819 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.467638 kubelet[2584]: W1008 20:02:26.465834 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.467638 kubelet[2584]: E1008 20:02:26.465855 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.467638 kubelet[2584]: E1008 20:02:26.466098 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.467638 kubelet[2584]: W1008 20:02:26.466108 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.467638 kubelet[2584]: E1008 20:02:26.466118 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.501627 containerd[1448]: time="2024-10-08T20:02:26.500716985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:26.501627 containerd[1448]: time="2024-10-08T20:02:26.500803147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:26.501627 containerd[1448]: time="2024-10-08T20:02:26.500856578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:26.503774 containerd[1448]: time="2024-10-08T20:02:26.501671851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:26.525689 systemd[1]: Started cri-containerd-955e2930bdcd5986a5493b4cf227abfc78a731683853dec8d6842eec51206a06.scope - libcontainer container 955e2930bdcd5986a5493b4cf227abfc78a731683853dec8d6842eec51206a06. Oct 8 20:02:26.557017 kubelet[2584]: E1008 20:02:26.556944 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.557017 kubelet[2584]: W1008 20:02:26.556967 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.557017 kubelet[2584]: E1008 20:02:26.556986 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.557017 kubelet[2584]: I1008 20:02:26.557017 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b58e5c77-b9e1-40fc-b328-51c2c4af45f8-registration-dir\") pod \"csi-node-driver-7lz5k\" (UID: \"b58e5c77-b9e1-40fc-b328-51c2c4af45f8\") " pod="calico-system/csi-node-driver-7lz5k" Oct 8 20:02:26.557398 kubelet[2584]: E1008 20:02:26.557302 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.557398 kubelet[2584]: W1008 20:02:26.557314 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.557398 kubelet[2584]: E1008 20:02:26.557333 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.557709 kubelet[2584]: I1008 20:02:26.557499 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2cz\" (UniqueName: \"kubernetes.io/projected/b58e5c77-b9e1-40fc-b328-51c2c4af45f8-kube-api-access-8s2cz\") pod \"csi-node-driver-7lz5k\" (UID: \"b58e5c77-b9e1-40fc-b328-51c2c4af45f8\") " pod="calico-system/csi-node-driver-7lz5k" Oct 8 20:02:26.557709 kubelet[2584]: E1008 20:02:26.557615 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.557709 kubelet[2584]: W1008 20:02:26.557624 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.557709 kubelet[2584]: E1008 20:02:26.557641 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.557898 kubelet[2584]: E1008 20:02:26.557855 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.557898 kubelet[2584]: W1008 20:02:26.557864 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.557898 kubelet[2584]: E1008 20:02:26.557874 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.558104 kubelet[2584]: E1008 20:02:26.558079 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.558104 kubelet[2584]: W1008 20:02:26.558092 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.558309 kubelet[2584]: E1008 20:02:26.558111 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.558376 kubelet[2584]: E1008 20:02:26.558358 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.558376 kubelet[2584]: W1008 20:02:26.558369 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.558643 kubelet[2584]: E1008 20:02:26.558388 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.559379 kubelet[2584]: E1008 20:02:26.559256 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.559379 kubelet[2584]: W1008 20:02:26.559279 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.559379 kubelet[2584]: E1008 20:02:26.559305 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.559946 kubelet[2584]: E1008 20:02:26.559933 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.560015 kubelet[2584]: W1008 20:02:26.560005 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.560209 kubelet[2584]: E1008 20:02:26.560081 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.560518 kubelet[2584]: E1008 20:02:26.560506 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.560703 kubelet[2584]: W1008 20:02:26.560570 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.560703 kubelet[2584]: E1008 20:02:26.560611 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.560882 kubelet[2584]: E1008 20:02:26.560870 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.560939 kubelet[2584]: W1008 20:02:26.560928 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.561024 kubelet[2584]: E1008 20:02:26.561001 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.561873 kubelet[2584]: E1008 20:02:26.561670 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.561873 kubelet[2584]: W1008 20:02:26.561682 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.561873 kubelet[2584]: E1008 20:02:26.561716 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.563098 kubelet[2584]: E1008 20:02:26.563048 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.563425 kubelet[2584]: W1008 20:02:26.563240 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.563425 kubelet[2584]: E1008 20:02:26.563277 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.563789 kubelet[2584]: E1008 20:02:26.563699 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.564105 kubelet[2584]: W1008 20:02:26.563946 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.564105 kubelet[2584]: E1008 20:02:26.563986 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.564617 kubelet[2584]: E1008 20:02:26.564511 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.564617 kubelet[2584]: W1008 20:02:26.564523 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.564617 kubelet[2584]: E1008 20:02:26.564550 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.564865 kubelet[2584]: E1008 20:02:26.564853 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.565065 kubelet[2584]: W1008 20:02:26.564913 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.565065 kubelet[2584]: E1008 20:02:26.564945 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.565524 kubelet[2584]: E1008 20:02:26.565511 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.566872 kubelet[2584]: W1008 20:02:26.566766 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.566872 kubelet[2584]: E1008 20:02:26.566794 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.567030 kubelet[2584]: E1008 20:02:26.567020 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.567122 kubelet[2584]: W1008 20:02:26.567099 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.567276 kubelet[2584]: E1008 20:02:26.567221 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.567409 kubelet[2584]: E1008 20:02:26.567398 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.567578 kubelet[2584]: W1008 20:02:26.567467 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.567694 kubelet[2584]: E1008 20:02:26.567670 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.568135 kubelet[2584]: E1008 20:02:26.567789 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.568135 kubelet[2584]: W1008 20:02:26.567800 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.568135 kubelet[2584]: E1008 20:02:26.567812 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.568512 kubelet[2584]: E1008 20:02:26.568451 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.568834 kubelet[2584]: W1008 20:02:26.568661 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.568834 kubelet[2584]: E1008 20:02:26.568682 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.569753 kubelet[2584]: E1008 20:02:26.569638 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.569753 kubelet[2584]: W1008 20:02:26.569650 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.569753 kubelet[2584]: E1008 20:02:26.569660 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.572760 containerd[1448]: time="2024-10-08T20:02:26.572443019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2t5ql,Uid:48ee1546-ca8d-485c-be26-f49ba9831531,Namespace:calico-system,Attempt:0,}" Oct 8 20:02:26.586330 containerd[1448]: time="2024-10-08T20:02:26.586273216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8567998667-fvf5w,Uid:e176156d-45ec-4d0e-8a31-d6cb05fb31e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"955e2930bdcd5986a5493b4cf227abfc78a731683853dec8d6842eec51206a06\"" Oct 8 20:02:26.588918 containerd[1448]: time="2024-10-08T20:02:26.588838308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 20:02:26.629017 containerd[1448]: time="2024-10-08T20:02:26.628123330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:26.629017 containerd[1448]: time="2024-10-08T20:02:26.628222267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:26.629017 containerd[1448]: time="2024-10-08T20:02:26.628258384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:26.629017 containerd[1448]: time="2024-10-08T20:02:26.628590399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:26.662614 kubelet[2584]: E1008 20:02:26.662417 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.662614 kubelet[2584]: W1008 20:02:26.662440 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.662614 kubelet[2584]: E1008 20:02:26.662462 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.663776 kubelet[2584]: E1008 20:02:26.662930 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.663776 kubelet[2584]: W1008 20:02:26.663603 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.663776 kubelet[2584]: E1008 20:02:26.663626 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.664700 kubelet[2584]: E1008 20:02:26.664283 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.664700 kubelet[2584]: W1008 20:02:26.664296 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.664700 kubelet[2584]: E1008 20:02:26.664314 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.665088 systemd[1]: Started cri-containerd-9742e9cfed906fc6377d81c54ed7754b1a951d5912c79000d4035cf4c1bb29a4.scope - libcontainer container 9742e9cfed906fc6377d81c54ed7754b1a951d5912c79000d4035cf4c1bb29a4. Oct 8 20:02:26.666579 kubelet[2584]: E1008 20:02:26.666315 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.666579 kubelet[2584]: W1008 20:02:26.666333 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.666579 kubelet[2584]: E1008 20:02:26.666372 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.666981 kubelet[2584]: E1008 20:02:26.666722 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.666981 kubelet[2584]: W1008 20:02:26.666750 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.666981 kubelet[2584]: E1008 20:02:26.666770 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.667488 kubelet[2584]: E1008 20:02:26.667234 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.667488 kubelet[2584]: W1008 20:02:26.667247 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.667488 kubelet[2584]: E1008 20:02:26.667266 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.667692 kubelet[2584]: E1008 20:02:26.667678 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.667886 kubelet[2584]: W1008 20:02:26.667837 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.668123 kubelet[2584]: E1008 20:02:26.668046 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.668458 kubelet[2584]: E1008 20:02:26.668321 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.668458 kubelet[2584]: W1008 20:02:26.668333 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.668458 kubelet[2584]: E1008 20:02:26.668350 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.668707 kubelet[2584]: E1008 20:02:26.668686 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.668950 kubelet[2584]: W1008 20:02:26.668806 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.668950 kubelet[2584]: E1008 20:02:26.668832 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.670054 kubelet[2584]: E1008 20:02:26.670013 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.670054 kubelet[2584]: W1008 20:02:26.670034 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.670170 kubelet[2584]: E1008 20:02:26.670072 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:26.684714 kubelet[2584]: E1008 20:02:26.684583 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:26.684714 kubelet[2584]: W1008 20:02:26.684606 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:26.684714 kubelet[2584]: E1008 20:02:26.684651 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:26.710183 containerd[1448]: time="2024-10-08T20:02:26.710076025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2t5ql,Uid:48ee1546-ca8d-485c-be26-f49ba9831531,Namespace:calico-system,Attempt:0,} returns sandbox id \"9742e9cfed906fc6377d81c54ed7754b1a951d5912c79000d4035cf4c1bb29a4\"" Oct 8 20:02:28.518401 kubelet[2584]: E1008 20:02:28.518358 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:29.913775 containerd[1448]: time="2024-10-08T20:02:29.913661312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:29.916680 containerd[1448]: time="2024-10-08T20:02:29.916347439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 8 20:02:29.921188 containerd[1448]: time="2024-10-08T20:02:29.918761627Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:29.922862 containerd[1448]: time="2024-10-08T20:02:29.922560749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:29.924159 containerd[1448]: time="2024-10-08T20:02:29.924129156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.335251013s" Oct 8 20:02:29.924285 containerd[1448]: time="2024-10-08T20:02:29.924262156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 8 20:02:30.148712 containerd[1448]: time="2024-10-08T20:02:30.148557260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 20:02:30.299559 containerd[1448]: time="2024-10-08T20:02:30.299514412Z" level=info msg="CreateContainer within sandbox \"955e2930bdcd5986a5493b4cf227abfc78a731683853dec8d6842eec51206a06\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 20:02:30.344571 containerd[1448]: time="2024-10-08T20:02:30.344443377Z" level=info msg="CreateContainer within sandbox \"955e2930bdcd5986a5493b4cf227abfc78a731683853dec8d6842eec51206a06\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"38eef79e79f77cc8576a655d3d7d2b8ce0ab01cc28f93e6db7f084310ad851b1\"" Oct 8 20:02:30.345888 containerd[1448]: time="2024-10-08T20:02:30.345848788Z" level=info msg="StartContainer for \"38eef79e79f77cc8576a655d3d7d2b8ce0ab01cc28f93e6db7f084310ad851b1\"" Oct 8 20:02:30.396957 systemd[1]: Started cri-containerd-38eef79e79f77cc8576a655d3d7d2b8ce0ab01cc28f93e6db7f084310ad851b1.scope - libcontainer container 38eef79e79f77cc8576a655d3d7d2b8ce0ab01cc28f93e6db7f084310ad851b1. 
Oct 8 20:02:30.470173 containerd[1448]: time="2024-10-08T20:02:30.470100163Z" level=info msg="StartContainer for \"38eef79e79f77cc8576a655d3d7d2b8ce0ab01cc28f93e6db7f084310ad851b1\" returns successfully" Oct 8 20:02:30.479823 kubelet[2584]: E1008 20:02:30.479761 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:30.667200 kubelet[2584]: E1008 20:02:30.667101 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.667454 kubelet[2584]: W1008 20:02:30.667332 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.667454 kubelet[2584]: E1008 20:02:30.667358 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.667689 kubelet[2584]: E1008 20:02:30.667620 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.667689 kubelet[2584]: W1008 20:02:30.667633 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.667689 kubelet[2584]: E1008 20:02:30.667645 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.668655 kubelet[2584]: E1008 20:02:30.668407 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.668655 kubelet[2584]: W1008 20:02:30.668421 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.668655 kubelet[2584]: E1008 20:02:30.668432 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.668833 kubelet[2584]: E1008 20:02:30.668821 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.669001 kubelet[2584]: W1008 20:02:30.668892 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.669001 kubelet[2584]: E1008 20:02:30.668914 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:30.669212 kubelet[2584]: E1008 20:02:30.669197 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.669365 kubelet[2584]: W1008 20:02:30.669272 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.669365 kubelet[2584]: E1008 20:02:30.669289 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.669519 kubelet[2584]: E1008 20:02:30.669506 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.669676 kubelet[2584]: W1008 20:02:30.669579 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.669676 kubelet[2584]: E1008 20:02:30.669595 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.669868 kubelet[2584]: E1008 20:02:30.669854 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.669943 kubelet[2584]: W1008 20:02:30.669930 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.670094 kubelet[2584]: E1008 20:02:30.670008 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.670251 kubelet[2584]: E1008 20:02:30.670240 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.670401 kubelet[2584]: W1008 20:02:30.670308 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.670401 kubelet[2584]: E1008 20:02:30.670322 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.670542 kubelet[2584]: E1008 20:02:30.670530 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.670618 kubelet[2584]: W1008 20:02:30.670605 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.670683 kubelet[2584]: E1008 20:02:30.670673 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:30.671026 kubelet[2584]: E1008 20:02:30.670924 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.671026 kubelet[2584]: W1008 20:02:30.670936 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.671026 kubelet[2584]: E1008 20:02:30.670948 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.671238 kubelet[2584]: E1008 20:02:30.671201 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.671800 kubelet[2584]: W1008 20:02:30.671786 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.671971 kubelet[2584]: E1008 20:02:30.671880 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.672083 kubelet[2584]: E1008 20:02:30.672072 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.672153 kubelet[2584]: W1008 20:02:30.672140 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.672222 kubelet[2584]: E1008 20:02:30.672211 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.672467 kubelet[2584]: E1008 20:02:30.672455 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.672634 kubelet[2584]: W1008 20:02:30.672524 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.672634 kubelet[2584]: E1008 20:02:30.672539 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.673503 kubelet[2584]: E1008 20:02:30.673489 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.673587 kubelet[2584]: W1008 20:02:30.673575 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.673655 kubelet[2584]: E1008 20:02:30.673644 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:30.673995 kubelet[2584]: E1008 20:02:30.673923 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.673995 kubelet[2584]: W1008 20:02:30.673936 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.673995 kubelet[2584]: E1008 20:02:30.673947 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.758850 kubelet[2584]: E1008 20:02:30.758807 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.758850 kubelet[2584]: W1008 20:02:30.758834 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.758850 kubelet[2584]: E1008 20:02:30.758854 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.759656 kubelet[2584]: E1008 20:02:30.759631 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.759656 kubelet[2584]: W1008 20:02:30.759649 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.759982 kubelet[2584]: E1008 20:02:30.759670 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.760023 kubelet[2584]: E1008 20:02:30.759981 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.760023 kubelet[2584]: W1008 20:02:30.759992 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.760023 kubelet[2584]: E1008 20:02:30.760004 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.760793 kubelet[2584]: E1008 20:02:30.760731 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.760793 kubelet[2584]: W1008 20:02:30.760769 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.760793 kubelet[2584]: E1008 20:02:30.760787 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:30.761144 kubelet[2584]: E1008 20:02:30.761118 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.761144 kubelet[2584]: W1008 20:02:30.761129 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.761144 kubelet[2584]: E1008 20:02:30.761139 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.761706 kubelet[2584]: E1008 20:02:30.761687 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.761706 kubelet[2584]: W1008 20:02:30.761702 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.761995 kubelet[2584]: E1008 20:02:30.761801 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.762472 kubelet[2584]: E1008 20:02:30.762444 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.762472 kubelet[2584]: W1008 20:02:30.762462 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.763859 kubelet[2584]: E1008 20:02:30.762999 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.763859 kubelet[2584]: W1008 20:02:30.763016 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.763859 kubelet[2584]: E1008 20:02:30.763030 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.763859 kubelet[2584]: E1008 20:02:30.763817 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.764479 kubelet[2584]: E1008 20:02:30.764453 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.764479 kubelet[2584]: W1008 20:02:30.764470 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.764702 kubelet[2584]: E1008 20:02:30.764490 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:30.765091 kubelet[2584]: E1008 20:02:30.765072 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.765091 kubelet[2584]: W1008 20:02:30.765087 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.765312 kubelet[2584]: E1008 20:02:30.765104 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.765800 kubelet[2584]: E1008 20:02:30.765781 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.765800 kubelet[2584]: W1008 20:02:30.765797 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.766157 kubelet[2584]: E1008 20:02:30.765875 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.766904 kubelet[2584]: E1008 20:02:30.766890 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.767005 kubelet[2584]: W1008 20:02:30.766990 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.767326 kubelet[2584]: E1008 20:02:30.767240 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.767326 kubelet[2584]: W1008 20:02:30.767252 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.768274 kubelet[2584]: E1008 20:02:30.767481 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.768274 kubelet[2584]: W1008 20:02:30.767493 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.768274 kubelet[2584]: E1008 20:02:30.767507 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.768274 kubelet[2584]: E1008 20:02:30.767535 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.768454 kubelet[2584]: E1008 20:02:30.768439 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:30.768632 kubelet[2584]: E1008 20:02:30.768621 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.768708 kubelet[2584]: W1008 20:02:30.768696 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.768887 kubelet[2584]: E1008 20:02:30.768873 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.769190 kubelet[2584]: E1008 20:02:30.769164 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.769271 kubelet[2584]: W1008 20:02:30.769259 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.769339 kubelet[2584]: E1008 20:02:30.769328 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.769602 kubelet[2584]: E1008 20:02:30.769591 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.769669 kubelet[2584]: W1008 20:02:30.769659 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.769780 kubelet[2584]: E1008 20:02:30.769726 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:30.770151 kubelet[2584]: E1008 20:02:30.770139 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:30.770234 kubelet[2584]: W1008 20:02:30.770222 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:30.770354 kubelet[2584]: E1008 20:02:30.770341 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:31.629820 kubelet[2584]: I1008 20:02:31.629755 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8567998667-fvf5w" podStartSLOduration=2.18234656 podStartE2EDuration="5.629716887s" podCreationTimestamp="2024-10-08 20:02:26 +0000 UTC" firstStartedPulling="2024-10-08 20:02:26.588269139 +0000 UTC m=+13.342906692" lastFinishedPulling="2024-10-08 20:02:30.035639406 +0000 UTC m=+16.790277019" observedRunningTime="2024-10-08 20:02:30.653383965 +0000 UTC m=+17.408021498" watchObservedRunningTime="2024-10-08 20:02:31.629716887 +0000 UTC m=+18.384354431" Oct 8 20:02:31.680537 kubelet[2584]: E1008 20:02:31.680152 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.680537 kubelet[2584]: W1008 20:02:31.680173 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.680537 kubelet[2584]: E1008 20:02:31.680214 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.680537 kubelet[2584]: E1008 20:02:31.680415 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.680537 kubelet[2584]: W1008 20:02:31.680426 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.680537 kubelet[2584]: E1008 20:02:31.680436 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.681059 kubelet[2584]: E1008 20:02:31.680670 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.681059 kubelet[2584]: W1008 20:02:31.680681 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.681059 kubelet[2584]: E1008 20:02:31.680691 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.681842 kubelet[2584]: E1008 20:02:31.681702 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.681842 kubelet[2584]: W1008 20:02:31.681715 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.681842 kubelet[2584]: E1008 20:02:31.681727 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:31.682331 kubelet[2584]: E1008 20:02:31.682100 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.682331 kubelet[2584]: W1008 20:02:31.682112 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.682331 kubelet[2584]: E1008 20:02:31.682122 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.682712 kubelet[2584]: E1008 20:02:31.682575 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.682712 kubelet[2584]: W1008 20:02:31.682587 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.682712 kubelet[2584]: E1008 20:02:31.682602 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.683364 kubelet[2584]: E1008 20:02:31.683152 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.683364 kubelet[2584]: W1008 20:02:31.683164 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.683364 kubelet[2584]: E1008 20:02:31.683175 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.683766 kubelet[2584]: E1008 20:02:31.683615 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.683766 kubelet[2584]: W1008 20:02:31.683626 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.683766 kubelet[2584]: E1008 20:02:31.683636 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.684068 kubelet[2584]: E1008 20:02:31.683972 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.684068 kubelet[2584]: W1008 20:02:31.683984 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.684068 kubelet[2584]: E1008 20:02:31.683994 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:31.684371 kubelet[2584]: E1008 20:02:31.684272 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.684371 kubelet[2584]: W1008 20:02:31.684283 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.684371 kubelet[2584]: E1008 20:02:31.684293 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.684777 kubelet[2584]: E1008 20:02:31.684671 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.684777 kubelet[2584]: W1008 20:02:31.684682 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.684777 kubelet[2584]: E1008 20:02:31.684693 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.685228 kubelet[2584]: E1008 20:02:31.685103 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.685228 kubelet[2584]: W1008 20:02:31.685115 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.685228 kubelet[2584]: E1008 20:02:31.685125 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.685474 kubelet[2584]: E1008 20:02:31.685365 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.685474 kubelet[2584]: W1008 20:02:31.685374 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.685474 kubelet[2584]: E1008 20:02:31.685384 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.686063 kubelet[2584]: E1008 20:02:31.685942 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.686063 kubelet[2584]: W1008 20:02:31.685955 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.686063 kubelet[2584]: E1008 20:02:31.685965 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:31.686695 kubelet[2584]: E1008 20:02:31.686608 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.686695 kubelet[2584]: W1008 20:02:31.686621 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.686695 kubelet[2584]: E1008 20:02:31.686631 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.769776 kubelet[2584]: E1008 20:02:31.767684 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.769776 kubelet[2584]: W1008 20:02:31.767711 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.769776 kubelet[2584]: E1008 20:02:31.767731 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.770311 kubelet[2584]: E1008 20:02:31.770073 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.770311 kubelet[2584]: W1008 20:02:31.770093 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.770311 kubelet[2584]: E1008 20:02:31.770121 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.770783 kubelet[2584]: E1008 20:02:31.770582 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.770783 kubelet[2584]: W1008 20:02:31.770596 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.770783 kubelet[2584]: E1008 20:02:31.770616 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.771005 kubelet[2584]: E1008 20:02:31.770992 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.771083 kubelet[2584]: W1008 20:02:31.771071 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.771272 kubelet[2584]: E1008 20:02:31.771188 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:31.771470 kubelet[2584]: E1008 20:02:31.771376 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.771470 kubelet[2584]: W1008 20:02:31.771388 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.771470 kubelet[2584]: E1008 20:02:31.771424 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.771779 kubelet[2584]: E1008 20:02:31.771668 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.771779 kubelet[2584]: W1008 20:02:31.771681 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.771779 kubelet[2584]: E1008 20:02:31.771771 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.772127 kubelet[2584]: E1008 20:02:31.772009 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.772127 kubelet[2584]: W1008 20:02:31.772021 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.772127 kubelet[2584]: E1008 20:02:31.772040 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.772463 kubelet[2584]: E1008 20:02:31.772330 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.772463 kubelet[2584]: W1008 20:02:31.772343 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.772463 kubelet[2584]: E1008 20:02:31.772360 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.773050 kubelet[2584]: E1008 20:02:31.772677 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.773050 kubelet[2584]: W1008 20:02:31.772690 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.773050 kubelet[2584]: E1008 20:02:31.772708 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:31.774041 kubelet[2584]: E1008 20:02:31.773940 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.774041 kubelet[2584]: W1008 20:02:31.773954 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.774041 kubelet[2584]: E1008 20:02:31.773969 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.774378 kubelet[2584]: E1008 20:02:31.774307 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.774378 kubelet[2584]: W1008 20:02:31.774319 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.774608 kubelet[2584]: E1008 20:02:31.774492 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.774798 kubelet[2584]: E1008 20:02:31.774700 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.774798 kubelet[2584]: W1008 20:02:31.774711 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.774989 kubelet[2584]: E1008 20:02:31.774900 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.775283 kubelet[2584]: E1008 20:02:31.775269 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.775443 kubelet[2584]: W1008 20:02:31.775348 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.775562 kubelet[2584]: E1008 20:02:31.775526 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.776556 kubelet[2584]: E1008 20:02:31.776524 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.776556 kubelet[2584]: W1008 20:02:31.776553 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.776676 kubelet[2584]: E1008 20:02:31.776596 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:31.777204 kubelet[2584]: E1008 20:02:31.776982 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.777204 kubelet[2584]: W1008 20:02:31.776997 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.777204 kubelet[2584]: E1008 20:02:31.777030 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.777715 kubelet[2584]: E1008 20:02:31.777440 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.777715 kubelet[2584]: W1008 20:02:31.777453 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.777715 kubelet[2584]: E1008 20:02:31.777467 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.778091 kubelet[2584]: E1008 20:02:31.778079 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.778170 kubelet[2584]: W1008 20:02:31.778155 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.778308 kubelet[2584]: E1008 20:02:31.778274 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:31.778487 kubelet[2584]: E1008 20:02:31.778438 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:31.778487 kubelet[2584]: W1008 20:02:31.778450 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:31.778487 kubelet[2584]: E1008 20:02:31.778460 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:33.100192 kubelet[2584]: E1008 20:02:33.099231 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:33.221144 kubelet[2584]: E1008 20:02:33.220817 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.221144 kubelet[2584]: W1008 20:02:33.220883 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.221144 kubelet[2584]: E1008 20:02:33.220916 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.222921 kubelet[2584]: E1008 20:02:33.221375 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.222921 kubelet[2584]: W1008 20:02:33.221397 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.222921 kubelet[2584]: E1008 20:02:33.221419 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.222921 kubelet[2584]: E1008 20:02:33.222061 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.222921 kubelet[2584]: W1008 20:02:33.222228 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.222921 kubelet[2584]: E1008 20:02:33.222257 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.223716 kubelet[2584]: E1008 20:02:33.223432 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.223716 kubelet[2584]: W1008 20:02:33.223458 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.223716 kubelet[2584]: E1008 20:02:33.223503 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:33.224369 kubelet[2584]: E1008 20:02:33.223949 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.224369 kubelet[2584]: W1008 20:02:33.223969 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.224369 kubelet[2584]: E1008 20:02:33.224015 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.224984 kubelet[2584]: E1008 20:02:33.224659 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.224984 kubelet[2584]: W1008 20:02:33.224724 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.224984 kubelet[2584]: E1008 20:02:33.224784 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.225568 kubelet[2584]: E1008 20:02:33.225302 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.225568 kubelet[2584]: W1008 20:02:33.225324 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.225568 kubelet[2584]: E1008 20:02:33.225345 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.226015 kubelet[2584]: E1008 20:02:33.225989 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.226159 kubelet[2584]: W1008 20:02:33.226137 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.226292 kubelet[2584]: E1008 20:02:33.226269 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.227066 kubelet[2584]: E1008 20:02:33.226859 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.227066 kubelet[2584]: W1008 20:02:33.226884 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.227066 kubelet[2584]: E1008 20:02:33.226905 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:33.228027 kubelet[2584]: E1008 20:02:33.227253 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.228027 kubelet[2584]: W1008 20:02:33.227272 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.228027 kubelet[2584]: E1008 20:02:33.227295 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.228027 kubelet[2584]: E1008 20:02:33.227591 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.228027 kubelet[2584]: W1008 20:02:33.227609 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.228027 kubelet[2584]: E1008 20:02:33.227627 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.229575 kubelet[2584]: E1008 20:02:33.228089 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.229575 kubelet[2584]: W1008 20:02:33.228109 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.229575 kubelet[2584]: E1008 20:02:33.228129 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.229575 kubelet[2584]: E1008 20:02:33.228443 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.229575 kubelet[2584]: W1008 20:02:33.228462 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.229575 kubelet[2584]: E1008 20:02:33.228481 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.229575 kubelet[2584]: E1008 20:02:33.228812 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.229575 kubelet[2584]: W1008 20:02:33.228830 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.229575 kubelet[2584]: E1008 20:02:33.228850 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:33.230213 containerd[1448]: time="2024-10-08T20:02:33.229350512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:33.231065 kubelet[2584]: E1008 20:02:33.230983 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.231065 kubelet[2584]: W1008 20:02:33.231018 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.231065 kubelet[2584]: E1008 20:02:33.231040 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.231484 kubelet[2584]: E1008 20:02:33.231468 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.231612 kubelet[2584]: W1008 20:02:33.231487 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.231612 kubelet[2584]: E1008 20:02:33.231507 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.231951 kubelet[2584]: E1008 20:02:33.231904 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.231951 kubelet[2584]: W1008 20:02:33.231923 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.231951 kubelet[2584]: E1008 20:02:33.231946 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.232503 kubelet[2584]: E1008 20:02:33.232473 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.232503 kubelet[2584]: W1008 20:02:33.232500 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.232824 kubelet[2584]: E1008 20:02:33.232533 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:33.233064 kubelet[2584]: E1008 20:02:33.233035 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.233166 kubelet[2584]: W1008 20:02:33.233115 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.233479 kubelet[2584]: E1008 20:02:33.233436 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.233479 kubelet[2584]: W1008 20:02:33.233454 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.233825 kubelet[2584]: E1008 20:02:33.233410 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.233825 kubelet[2584]: E1008 20:02:33.233723 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.233825 kubelet[2584]: W1008 20:02:33.233804 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.234030 kubelet[2584]: E1008 20:02:33.233830 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.234299 kubelet[2584]: E1008 20:02:33.234209 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.234387 kubelet[2584]: E1008 20:02:33.234320 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.234387 kubelet[2584]: W1008 20:02:33.234345 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.234387 kubelet[2584]: E1008 20:02:33.234366 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.234678 kubelet[2584]: E1008 20:02:33.234651 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.234678 kubelet[2584]: W1008 20:02:33.234676 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.235121 kubelet[2584]: E1008 20:02:33.234707 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:33.235121 kubelet[2584]: E1008 20:02:33.235113 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.235455 kubelet[2584]: W1008 20:02:33.235132 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.235455 kubelet[2584]: E1008 20:02:33.235164 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.236452 kubelet[2584]: E1008 20:02:33.236189 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.236452 kubelet[2584]: W1008 20:02:33.236223 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.236452 kubelet[2584]: E1008 20:02:33.236266 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.236836 containerd[1448]: time="2024-10-08T20:02:33.236401007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 20:02:33.237464 kubelet[2584]: E1008 20:02:33.237257 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.237464 kubelet[2584]: W1008 20:02:33.237279 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.237464 kubelet[2584]: E1008 20:02:33.237332 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.238032 kubelet[2584]: E1008 20:02:33.238005 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.238391 kubelet[2584]: W1008 20:02:33.238173 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.238391 kubelet[2584]: E1008 20:02:33.238246 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:33.238693 kubelet[2584]: E1008 20:02:33.238666 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.239066 kubelet[2584]: W1008 20:02:33.238838 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.239066 kubelet[2584]: E1008 20:02:33.238903 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.239540 kubelet[2584]: E1008 20:02:33.239350 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.239540 kubelet[2584]: W1008 20:02:33.239377 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.239540 kubelet[2584]: E1008 20:02:33.239450 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.240393 kubelet[2584]: E1008 20:02:33.240124 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.240393 kubelet[2584]: W1008 20:02:33.240150 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.240393 kubelet[2584]: E1008 20:02:33.240192 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.240822 kubelet[2584]: E1008 20:02:33.240794 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.240995 kubelet[2584]: W1008 20:02:33.240969 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.241509 kubelet[2584]: E1008 20:02:33.241131 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.241846 kubelet[2584]: E1008 20:02:33.241730 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.241959 kubelet[2584]: W1008 20:02:33.241847 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.241959 kubelet[2584]: E1008 20:02:33.241936 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:02:33.242522 kubelet[2584]: E1008 20:02:33.242486 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:02:33.242522 kubelet[2584]: W1008 20:02:33.242517 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:02:33.242701 kubelet[2584]: E1008 20:02:33.242541 2584 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:02:33.255365 containerd[1448]: time="2024-10-08T20:02:33.255239417Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:33.263135 containerd[1448]: time="2024-10-08T20:02:33.263019222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:33.265223 containerd[1448]: time="2024-10-08T20:02:33.264910525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 3.116275569s" Oct 8 20:02:33.265223 containerd[1448]: time="2024-10-08T20:02:33.264997089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 20:02:33.270421 containerd[1448]: time="2024-10-08T20:02:33.270356697Z" level=info msg="CreateContainer within sandbox \"9742e9cfed906fc6377d81c54ed7754b1a951d5912c79000d4035cf4c1bb29a4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 20:02:33.406867 containerd[1448]: time="2024-10-08T20:02:33.403098488Z" level=info msg="CreateContainer within sandbox \"9742e9cfed906fc6377d81c54ed7754b1a951d5912c79000d4035cf4c1bb29a4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"67338d7d3638fc2b9dcfd31bd0832f5b066f40fab9027d739d5a7dbf021fcb1c\"" Oct 8 20:02:33.406867 containerd[1448]: time="2024-10-08T20:02:33.405237016Z" level=info msg="StartContainer for \"67338d7d3638fc2b9dcfd31bd0832f5b066f40fab9027d739d5a7dbf021fcb1c\"" Oct 8 20:02:33.482916 systemd[1]: Started cri-containerd-67338d7d3638fc2b9dcfd31bd0832f5b066f40fab9027d739d5a7dbf021fcb1c.scope - libcontainer container 67338d7d3638fc2b9dcfd31bd0832f5b066f40fab9027d739d5a7dbf021fcb1c. Oct 8 20:02:33.524018 containerd[1448]: time="2024-10-08T20:02:33.523772920Z" level=info msg="StartContainer for \"67338d7d3638fc2b9dcfd31bd0832f5b066f40fab9027d739d5a7dbf021fcb1c\" returns successfully" Oct 8 20:02:33.532106 systemd[1]: cri-containerd-67338d7d3638fc2b9dcfd31bd0832f5b066f40fab9027d739d5a7dbf021fcb1c.scope: Deactivated successfully. Oct 8 20:02:33.561047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67338d7d3638fc2b9dcfd31bd0832f5b066f40fab9027d739d5a7dbf021fcb1c-rootfs.mount: Deactivated successfully. 
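The repeated driver-call.go and plugins.go errors above come from the kubelet probing its FlexVolume plugin directory (/opt/libexec/kubernetes/kubelet-plugins/volume/exec) and invoking each driver with the init subcommand. The nodeagent~uds/uds binary is not on the node yet; the pod2daemon-flexvol init container pulled and started just above is what installs it, so until then the call produces no output and the kubelet's JSON decode fails with "unexpected end of JSON input". A minimal sketch of the driver side of that protocol, in Go (illustrative only, not Calico's actual uds driver):

// Illustrative sketch only, not Calico's uds driver: a minimal FlexVolume-style
// executable that answers the kubelet's "init" probe with the JSON it expects.
// An empty stdout is exactly what produces the "unexpected end of JSON input"
// unmarshal error logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		// No subcommand given; nothing sensible to report.
		os.Exit(1)
	}
	var resp driverStatus
	switch os.Args[1] {
	case "init":
		// Tell the kubelet not to drive attach/detach for this driver.
		resp = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		resp = driverStatus{Status: "Not supported", Message: os.Args[1]}
	}
	out, err := json.Marshal(resp)
	if err != nil {
		os.Exit(1)
	}
	fmt.Println(string(out))
}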
Oct 8 20:02:33.708156 containerd[1448]: time="2024-10-08T20:02:33.707793445Z" level=info msg="shim disconnected" id=67338d7d3638fc2b9dcfd31bd0832f5b066f40fab9027d739d5a7dbf021fcb1c namespace=k8s.io Oct 8 20:02:33.708156 containerd[1448]: time="2024-10-08T20:02:33.707943758Z" level=warning msg="cleaning up after shim disconnected" id=67338d7d3638fc2b9dcfd31bd0832f5b066f40fab9027d739d5a7dbf021fcb1c namespace=k8s.io Oct 8 20:02:33.708156 containerd[1448]: time="2024-10-08T20:02:33.707969326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:34.182619 containerd[1448]: time="2024-10-08T20:02:34.182491607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 20:02:34.477774 kubelet[2584]: E1008 20:02:34.477533 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:36.479792 kubelet[2584]: E1008 20:02:36.477655 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:38.478135 kubelet[2584]: E1008 20:02:38.478016 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:40.477350 kubelet[2584]: E1008 20:02:40.477245 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:40.683817 containerd[1448]: time="2024-10-08T20:02:40.682841178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:40.688332 containerd[1448]: time="2024-10-08T20:02:40.688235165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 20:02:40.690376 containerd[1448]: time="2024-10-08T20:02:40.690271708Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:40.694026 containerd[1448]: time="2024-10-08T20:02:40.693861759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:40.696570 containerd[1448]: time="2024-10-08T20:02:40.696268908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 6.513661814s" Oct 8 20:02:40.696570 containerd[1448]: time="2024-10-08T20:02:40.696345873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 20:02:40.703036 containerd[1448]: time="2024-10-08T20:02:40.702934223Z" level=info msg="CreateContainer within sandbox \"9742e9cfed906fc6377d81c54ed7754b1a951d5912c79000d4035cf4c1bb29a4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 20:02:40.861110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403658820.mount: Deactivated successfully. Oct 8 20:02:40.875512 containerd[1448]: time="2024-10-08T20:02:40.875385407Z" level=info msg="CreateContainer within sandbox \"9742e9cfed906fc6377d81c54ed7754b1a951d5912c79000d4035cf4c1bb29a4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874\"" Oct 8 20:02:40.878357 containerd[1448]: time="2024-10-08T20:02:40.877726141Z" level=info msg="StartContainer for \"0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874\"" Oct 8 20:02:41.008371 systemd[1]: run-containerd-runc-k8s.io-0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874-runc.yzJE6g.mount: Deactivated successfully. Oct 8 20:02:41.017927 systemd[1]: Started cri-containerd-0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874.scope - libcontainer container 0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874. Oct 8 20:02:41.053273 containerd[1448]: time="2024-10-08T20:02:41.053232796Z" level=info msg="StartContainer for \"0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874\" returns successfully" Oct 8 20:02:42.477393 kubelet[2584]: E1008 20:02:42.477203 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:42.587823 systemd[1]: cri-containerd-0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874.scope: Deactivated successfully. Oct 8 20:02:42.641212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874-rootfs.mount: Deactivated successfully. 
Oct 8 20:02:42.916287 kubelet[2584]: I1008 20:02:42.870305 2584 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 8 20:02:43.399964 containerd[1448]: time="2024-10-08T20:02:43.399407971Z" level=info msg="shim disconnected" id=0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874 namespace=k8s.io Oct 8 20:02:43.403104 containerd[1448]: time="2024-10-08T20:02:43.400090462Z" level=warning msg="cleaning up after shim disconnected" id=0e52fb06f4b432e6c7e695e3fab772cc081a3596913c9b792fa49ff893d11874 namespace=k8s.io Oct 8 20:02:43.403104 containerd[1448]: time="2024-10-08T20:02:43.401820850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:43.424500 kubelet[2584]: I1008 20:02:43.423955 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtzh\" (UniqueName: \"kubernetes.io/projected/5180ec07-7d2b-4c4f-81ef-844e22c96463-kube-api-access-xwtzh\") pod \"calico-kube-controllers-5b597657d9-5blcx\" (UID: \"5180ec07-7d2b-4c4f-81ef-844e22c96463\") " pod="calico-system/calico-kube-controllers-5b597657d9-5blcx" Oct 8 20:02:43.424769 kubelet[2584]: I1008 20:02:43.424550 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26hp5\" (UniqueName: \"kubernetes.io/projected/abaa7ba7-8fa9-43d2-952d-c6239abb7b81-kube-api-access-26hp5\") pod \"coredns-6f6b679f8f-pk7qz\" (UID: \"abaa7ba7-8fa9-43d2-952d-c6239abb7b81\") " pod="kube-system/coredns-6f6b679f8f-pk7qz" Oct 8 20:02:43.424769 kubelet[2584]: I1008 20:02:43.424608 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8917b79-40cc-443c-bc04-f6952b0b2955-config-volume\") pod \"coredns-6f6b679f8f-7g2zh\" (UID: \"f8917b79-40cc-443c-bc04-f6952b0b2955\") " pod="kube-system/coredns-6f6b679f8f-7g2zh" Oct 8 20:02:43.424769 kubelet[2584]: I1008 20:02:43.424664 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5nr6\" (UniqueName: \"kubernetes.io/projected/f8917b79-40cc-443c-bc04-f6952b0b2955-kube-api-access-b5nr6\") pod \"coredns-6f6b679f8f-7g2zh\" (UID: \"f8917b79-40cc-443c-bc04-f6952b0b2955\") " pod="kube-system/coredns-6f6b679f8f-7g2zh" Oct 8 20:02:43.424769 kubelet[2584]: I1008 20:02:43.424711 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abaa7ba7-8fa9-43d2-952d-c6239abb7b81-config-volume\") pod \"coredns-6f6b679f8f-pk7qz\" (UID: \"abaa7ba7-8fa9-43d2-952d-c6239abb7b81\") " pod="kube-system/coredns-6f6b679f8f-pk7qz" Oct 8 20:02:43.427871 kubelet[2584]: I1008 20:02:43.424799 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5180ec07-7d2b-4c4f-81ef-844e22c96463-tigera-ca-bundle\") pod \"calico-kube-controllers-5b597657d9-5blcx\" (UID: \"5180ec07-7d2b-4c4f-81ef-844e22c96463\") " pod="calico-system/calico-kube-controllers-5b597657d9-5blcx" Oct 8 20:02:43.436915 systemd[1]: Created slice kubepods-burstable-podf8917b79_40cc_443c_bc04_f6952b0b2955.slice - libcontainer container kubepods-burstable-podf8917b79_40cc_443c_bc04_f6952b0b2955.slice. 
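With the node now reporting Ready, the kubelet begins attaching volumes for the pending coredns and calico-kube-controllers pods, and the sandbox attempts that follow fail repeatedly with "stat /var/lib/calico/nodename: no such file or directory". As the error text itself says, the Calico CNI plugin resolves the node name from that file, and the file only appears once the calico/node container is running and has mounted /var/lib/calico/. A minimal sketch of that lookup (illustrative, not Calico's code):

// Illustrative sketch, not Calico's implementation: the CNI plugin reads the
// node name from /var/lib/calico/nodename, a file the calico/node container
// writes once it is running. Until that file exists, sandbox add/delete calls
// fail exactly as the log entries below show.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Matches the condition reported in the log:
		// "stat /var/lib/calico/nodename: no such file or directory"
		fmt.Fprintf(os.Stderr, "calico node name unavailable: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("calico node name: %s\n", strings.TrimSpace(string(data)))
}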
Oct 8 20:02:43.460002 systemd[1]: Created slice kubepods-burstable-podabaa7ba7_8fa9_43d2_952d_c6239abb7b81.slice - libcontainer container kubepods-burstable-podabaa7ba7_8fa9_43d2_952d_c6239abb7b81.slice. Oct 8 20:02:43.470626 systemd[1]: Created slice kubepods-besteffort-pod5180ec07_7d2b_4c4f_81ef_844e22c96463.slice - libcontainer container kubepods-besteffort-pod5180ec07_7d2b_4c4f_81ef_844e22c96463.slice. Oct 8 20:02:43.752913 containerd[1448]: time="2024-10-08T20:02:43.752701662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7g2zh,Uid:f8917b79-40cc-443c-bc04-f6952b0b2955,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:43.769866 containerd[1448]: time="2024-10-08T20:02:43.769727231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pk7qz,Uid:abaa7ba7-8fa9-43d2-952d-c6239abb7b81,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:43.778626 containerd[1448]: time="2024-10-08T20:02:43.778334939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b597657d9-5blcx,Uid:5180ec07-7d2b-4c4f-81ef-844e22c96463,Namespace:calico-system,Attempt:0,}" Oct 8 20:02:44.172805 containerd[1448]: time="2024-10-08T20:02:44.171830637Z" level=error msg="Failed to destroy network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.174448 containerd[1448]: time="2024-10-08T20:02:44.173102684Z" level=error msg="Failed to destroy network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.177517 containerd[1448]: time="2024-10-08T20:02:44.177466094Z" level=error msg="encountered an error cleaning up failed sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.178949 containerd[1448]: time="2024-10-08T20:02:44.177543360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7g2zh,Uid:f8917b79-40cc-443c-bc04-f6952b0b2955,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.178949 containerd[1448]: time="2024-10-08T20:02:44.177633208Z" level=error msg="encountered an error cleaning up failed sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.178949 containerd[1448]: time="2024-10-08T20:02:44.177697870Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5b597657d9-5blcx,Uid:5180ec07-7d2b-4c4f-81ef-844e22c96463,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.185413 containerd[1448]: time="2024-10-08T20:02:44.184508182Z" level=error msg="Failed to destroy network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.185413 containerd[1448]: time="2024-10-08T20:02:44.184889569Z" level=error msg="encountered an error cleaning up failed sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.185413 containerd[1448]: time="2024-10-08T20:02:44.184930776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pk7qz,Uid:abaa7ba7-8fa9-43d2-952d-c6239abb7b81,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.187056 kubelet[2584]: E1008 20:02:44.186257 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.187056 kubelet[2584]: E1008 20:02:44.186346 2584 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7g2zh" Oct 8 20:02:44.187056 kubelet[2584]: E1008 20:02:44.186370 2584 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7g2zh" Oct 8 20:02:44.187442 kubelet[2584]: E1008 20:02:44.186420 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7g2zh_kube-system(f8917b79-40cc-443c-bc04-f6952b0b2955)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-7g2zh_kube-system(f8917b79-40cc-443c-bc04-f6952b0b2955)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7g2zh" podUID="f8917b79-40cc-443c-bc04-f6952b0b2955" Oct 8 20:02:44.187442 kubelet[2584]: E1008 20:02:44.186685 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.187442 kubelet[2584]: E1008 20:02:44.186711 2584 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b597657d9-5blcx" Oct 8 20:02:44.187560 kubelet[2584]: E1008 20:02:44.186731 2584 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b597657d9-5blcx" Oct 8 20:02:44.187560 kubelet[2584]: E1008 20:02:44.186795 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b597657d9-5blcx_calico-system(5180ec07-7d2b-4c4f-81ef-844e22c96463)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b597657d9-5blcx_calico-system(5180ec07-7d2b-4c4f-81ef-844e22c96463)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b597657d9-5blcx" podUID="5180ec07-7d2b-4c4f-81ef-844e22c96463" Oct 8 20:02:44.187560 kubelet[2584]: E1008 20:02:44.186799 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.187702 kubelet[2584]: E1008 20:02:44.186897 2584 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pk7qz" Oct 8 20:02:44.187702 kubelet[2584]: E1008 20:02:44.186920 2584 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pk7qz" Oct 8 20:02:44.187702 kubelet[2584]: E1008 20:02:44.186989 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-pk7qz_kube-system(abaa7ba7-8fa9-43d2-952d-c6239abb7b81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-pk7qz_kube-system(abaa7ba7-8fa9-43d2-952d-c6239abb7b81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pk7qz" podUID="abaa7ba7-8fa9-43d2-952d-c6239abb7b81" Oct 8 20:02:44.207409 kubelet[2584]: I1008 20:02:44.207371 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:02:44.209413 kubelet[2584]: I1008 20:02:44.209373 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:02:44.216046 containerd[1448]: time="2024-10-08T20:02:44.215805859Z" level=info msg="StopPodSandbox for \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\"" Oct 8 20:02:44.218273 containerd[1448]: time="2024-10-08T20:02:44.217590760Z" level=info msg="StopPodSandbox for \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\"" Oct 8 20:02:44.218381 containerd[1448]: time="2024-10-08T20:02:44.218359132Z" level=info msg="Ensure that sandbox 345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40 in task-service has been cleanup successfully" Oct 8 20:02:44.218797 containerd[1448]: time="2024-10-08T20:02:44.218766517Z" level=info msg="Ensure that sandbox 88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96 in task-service has been cleanup successfully" Oct 8 20:02:44.223839 kubelet[2584]: I1008 20:02:44.223806 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:02:44.224671 containerd[1448]: time="2024-10-08T20:02:44.224350999Z" level=info msg="StopPodSandbox for \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\"" Oct 8 20:02:44.224671 containerd[1448]: time="2024-10-08T20:02:44.224502594Z" level=info msg="Ensure that sandbox 2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504 in task-service has been cleanup successfully" Oct 8 20:02:44.236833 containerd[1448]: time="2024-10-08T20:02:44.236421776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 20:02:44.283788 containerd[1448]: 
time="2024-10-08T20:02:44.283729314Z" level=error msg="StopPodSandbox for \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\" failed" error="failed to destroy network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.284310 kubelet[2584]: E1008 20:02:44.284118 2584 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:02:44.284310 kubelet[2584]: E1008 20:02:44.284189 2584 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96"} Oct 8 20:02:44.284310 kubelet[2584]: E1008 20:02:44.284250 2584 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5180ec07-7d2b-4c4f-81ef-844e22c96463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:02:44.284310 kubelet[2584]: E1008 20:02:44.284277 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5180ec07-7d2b-4c4f-81ef-844e22c96463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b597657d9-5blcx" podUID="5180ec07-7d2b-4c4f-81ef-844e22c96463" Oct 8 20:02:44.293766 containerd[1448]: time="2024-10-08T20:02:44.293551902Z" level=error msg="StopPodSandbox for \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\" failed" error="failed to destroy network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.293897 kubelet[2584]: E1008 20:02:44.293840 2584 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:02:44.293943 kubelet[2584]: E1008 20:02:44.293915 2584 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40"} Oct 8 20:02:44.294066 kubelet[2584]: E1008 20:02:44.293981 2584 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abaa7ba7-8fa9-43d2-952d-c6239abb7b81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:02:44.294066 kubelet[2584]: E1008 20:02:44.294016 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abaa7ba7-8fa9-43d2-952d-c6239abb7b81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pk7qz" podUID="abaa7ba7-8fa9-43d2-952d-c6239abb7b81" Oct 8 20:02:44.294860 containerd[1448]: time="2024-10-08T20:02:44.294814722Z" level=error msg="StopPodSandbox for \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\" failed" error="failed to destroy network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.295191 kubelet[2584]: E1008 20:02:44.295048 2584 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:02:44.295191 kubelet[2584]: E1008 20:02:44.295077 2584 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504"} Oct 8 20:02:44.295191 kubelet[2584]: E1008 20:02:44.295102 2584 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8917b79-40cc-443c-bc04-f6952b0b2955\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:02:44.295191 kubelet[2584]: E1008 20:02:44.295123 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8917b79-40cc-443c-bc04-f6952b0b2955\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7g2zh" podUID="f8917b79-40cc-443c-bc04-f6952b0b2955" Oct 8 20:02:44.491483 systemd[1]: Created slice kubepods-besteffort-podb58e5c77_b9e1_40fc_b328_51c2c4af45f8.slice - libcontainer container kubepods-besteffort-podb58e5c77_b9e1_40fc_b328_51c2c4af45f8.slice. Oct 8 20:02:44.501230 containerd[1448]: time="2024-10-08T20:02:44.500429658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7lz5k,Uid:b58e5c77-b9e1-40fc-b328-51c2c4af45f8,Namespace:calico-system,Attempt:0,}" Oct 8 20:02:44.619845 containerd[1448]: time="2024-10-08T20:02:44.619796033Z" level=error msg="Failed to destroy network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.620359 containerd[1448]: time="2024-10-08T20:02:44.620333502Z" level=error msg="encountered an error cleaning up failed sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.620499 containerd[1448]: time="2024-10-08T20:02:44.620469988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7lz5k,Uid:b58e5c77-b9e1-40fc-b328-51c2c4af45f8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.620833 kubelet[2584]: E1008 20:02:44.620794 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:44.620967 kubelet[2584]: E1008 20:02:44.620948 2584 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7lz5k" Oct 8 20:02:44.621076 kubelet[2584]: E1008 20:02:44.621057 2584 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7lz5k" Oct 8 20:02:44.621240 kubelet[2584]: E1008 20:02:44.621190 2584 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7lz5k_calico-system(b58e5c77-b9e1-40fc-b328-51c2c4af45f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7lz5k_calico-system(b58e5c77-b9e1-40fc-b328-51c2c4af45f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:44.808178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96-shm.mount: Deactivated successfully. Oct 8 20:02:44.808633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40-shm.mount: Deactivated successfully. Oct 8 20:02:44.809050 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504-shm.mount: Deactivated successfully. Oct 8 20:02:45.234988 kubelet[2584]: I1008 20:02:45.234798 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:02:45.238327 containerd[1448]: time="2024-10-08T20:02:45.237690329Z" level=info msg="StopPodSandbox for \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\"" Oct 8 20:02:45.240196 containerd[1448]: time="2024-10-08T20:02:45.240087940Z" level=info msg="Ensure that sandbox 1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb in task-service has been cleanup successfully" Oct 8 20:02:45.306502 containerd[1448]: time="2024-10-08T20:02:45.306346883Z" level=error msg="StopPodSandbox for \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\" failed" error="failed to destroy network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:45.306984 kubelet[2584]: E1008 20:02:45.306861 2584 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:02:45.306984 kubelet[2584]: E1008 20:02:45.306960 2584 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb"} Oct 8 20:02:45.307249 kubelet[2584]: E1008 20:02:45.307037 2584 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b58e5c77-b9e1-40fc-b328-51c2c4af45f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:02:45.307249 kubelet[2584]: E1008 20:02:45.307100 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b58e5c77-b9e1-40fc-b328-51c2c4af45f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7lz5k" podUID="b58e5c77-b9e1-40fc-b328-51c2c4af45f8" Oct 8 20:02:52.490307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341589305.mount: Deactivated successfully. Oct 8 20:02:54.470627 containerd[1448]: time="2024-10-08T20:02:54.449205658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 20:02:54.471541 containerd[1448]: time="2024-10-08T20:02:54.435166637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:54.488840 containerd[1448]: time="2024-10-08T20:02:54.487357342Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:54.505207 containerd[1448]: time="2024-10-08T20:02:54.505123386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:02:54.508832 containerd[1448]: time="2024-10-08T20:02:54.508659221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 10.272177803s" Oct 8 20:02:54.509014 containerd[1448]: time="2024-10-08T20:02:54.508881728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 20:02:54.541814 kubelet[2584]: E1008 20:02:54.541718 2584 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.065s" Oct 8 20:02:54.603864 containerd[1448]: time="2024-10-08T20:02:54.599709980Z" level=info msg="StopPodSandbox for \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\"" Oct 8 20:02:54.674170 containerd[1448]: time="2024-10-08T20:02:54.674112141Z" level=error msg="StopPodSandbox for \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\" failed" error="failed to destroy network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:02:54.681284 kubelet[2584]: E1008 20:02:54.681253 2584 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:02:54.681493 kubelet[2584]: E1008 20:02:54.681396 2584 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96"} Oct 8 20:02:54.681493 kubelet[2584]: E1008 20:02:54.681437 2584 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5180ec07-7d2b-4c4f-81ef-844e22c96463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:02:54.681493 kubelet[2584]: E1008 20:02:54.681463 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5180ec07-7d2b-4c4f-81ef-844e22c96463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b597657d9-5blcx" podUID="5180ec07-7d2b-4c4f-81ef-844e22c96463" Oct 8 20:02:54.681888 containerd[1448]: time="2024-10-08T20:02:54.681846413Z" level=info msg="CreateContainer within sandbox \"9742e9cfed906fc6377d81c54ed7754b1a951d5912c79000d4035cf4c1bb29a4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 20:02:54.732992 containerd[1448]: time="2024-10-08T20:02:54.732765523Z" level=info msg="CreateContainer within sandbox \"9742e9cfed906fc6377d81c54ed7754b1a951d5912c79000d4035cf4c1bb29a4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fe37dd6a37401a4dba4ca663e04fb11cae77d561d4d5af2666c7d760528ce9df\"" Oct 8 20:02:54.734936 containerd[1448]: time="2024-10-08T20:02:54.734582782Z" level=info msg="StartContainer for \"fe37dd6a37401a4dba4ca663e04fb11cae77d561d4d5af2666c7d760528ce9df\"" Oct 8 20:02:54.879726 systemd[1]: Started cri-containerd-fe37dd6a37401a4dba4ca663e04fb11cae77d561d4d5af2666c7d760528ce9df.scope - libcontainer container fe37dd6a37401a4dba4ca663e04fb11cae77d561d4d5af2666c7d760528ce9df. Oct 8 20:02:54.932921 containerd[1448]: time="2024-10-08T20:02:54.932871021Z" level=info msg="StartContainer for \"fe37dd6a37401a4dba4ca663e04fb11cae77d561d4d5af2666c7d760528ce9df\" returns successfully" Oct 8 20:02:55.072905 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 20:02:55.074567 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 20:02:55.480643 containerd[1448]: time="2024-10-08T20:02:55.479939616Z" level=info msg="StopPodSandbox for \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\"" Oct 8 20:02:55.710657 systemd[1]: run-containerd-runc-k8s.io-fe37dd6a37401a4dba4ca663e04fb11cae77d561d4d5af2666c7d760528ce9df-runc.W5Qey7.mount: Deactivated successfully. 
Oct 8 20:02:56.481863 containerd[1448]: time="2024-10-08T20:02:56.481175616Z" level=info msg="StopPodSandbox for \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\"" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:55.624 [INFO][3707] k8s.go 608: Cleaning up netns ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:55.626 [INFO][3707] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" iface="eth0" netns="/var/run/netns/cni-b06d8eae-b1e8-28b5-5877-c4d64b076c4c" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:55.629 [INFO][3707] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" iface="eth0" netns="/var/run/netns/cni-b06d8eae-b1e8-28b5-5877-c4d64b076c4c" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:55.631 [INFO][3707] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" iface="eth0" netns="/var/run/netns/cni-b06d8eae-b1e8-28b5-5877-c4d64b076c4c" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:55.631 [INFO][3707] k8s.go 615: Releasing IP address(es) ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:55.631 [INFO][3707] utils.go 188: Calico CNI releasing IP address ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:56.251 [INFO][3722] ipam_plugin.go 417: Releasing address using handleID ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" HandleID="k8s-pod-network.345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:56.268 [INFO][3722] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:56.278 [INFO][3722] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:56.601 [WARNING][3722] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" HandleID="k8s-pod-network.345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:56.602 [INFO][3722] ipam_plugin.go 445: Releasing address using workloadID ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" HandleID="k8s-pod-network.345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:56.606 [INFO][3722] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:02:56.619938 containerd[1448]: 2024-10-08 20:02:56.609 [INFO][3707] k8s.go 621: Teardown processing complete. 
ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:02:56.625471 containerd[1448]: time="2024-10-08T20:02:56.620218570Z" level=info msg="TearDown network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\" successfully" Oct 8 20:02:56.625471 containerd[1448]: time="2024-10-08T20:02:56.620258214Z" level=info msg="StopPodSandbox for \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\" returns successfully" Oct 8 20:02:56.621488 systemd[1]: run-netns-cni\x2db06d8eae\x2db1e8\x2d28b5\x2d5877\x2dc4d64b076c4c.mount: Deactivated successfully. Oct 8 20:02:56.654433 containerd[1448]: time="2024-10-08T20:02:56.653986195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pk7qz,Uid:abaa7ba7-8fa9-43d2-952d-c6239abb7b81,Namespace:kube-system,Attempt:1,}" Oct 8 20:02:56.705216 kubelet[2584]: I1008 20:02:56.700704 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2t5ql" podStartSLOduration=2.782411321 podStartE2EDuration="30.700681134s" podCreationTimestamp="2024-10-08 20:02:26 +0000 UTC" firstStartedPulling="2024-10-08 20:02:26.713125639 +0000 UTC m=+13.467763182" lastFinishedPulling="2024-10-08 20:02:54.631395461 +0000 UTC m=+41.386032995" observedRunningTime="2024-10-08 20:02:55.628599088 +0000 UTC m=+42.383236631" watchObservedRunningTime="2024-10-08 20:02:56.700681134 +0000 UTC m=+43.455318677" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.685 [INFO][3765] k8s.go 608: Cleaning up netns ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.685 [INFO][3765] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" iface="eth0" netns="/var/run/netns/cni-057495f7-33c3-232e-bc30-ed61a53e8e0c" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.685 [INFO][3765] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" iface="eth0" netns="/var/run/netns/cni-057495f7-33c3-232e-bc30-ed61a53e8e0c" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.685 [INFO][3765] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" iface="eth0" netns="/var/run/netns/cni-057495f7-33c3-232e-bc30-ed61a53e8e0c" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.685 [INFO][3765] k8s.go 615: Releasing IP address(es) ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.685 [INFO][3765] utils.go 188: Calico CNI releasing IP address ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.760 [INFO][3808] ipam_plugin.go 417: Releasing address using handleID ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" HandleID="k8s-pod-network.2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.760 [INFO][3808] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.761 [INFO][3808] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.774 [WARNING][3808] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" HandleID="k8s-pod-network.2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.774 [INFO][3808] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" HandleID="k8s-pod-network.2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.777 [INFO][3808] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:02:56.784946 containerd[1448]: 2024-10-08 20:02:56.780 [INFO][3765] k8s.go 621: Teardown processing complete. ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:02:56.790083 containerd[1448]: time="2024-10-08T20:02:56.789383328Z" level=info msg="TearDown network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\" successfully" Oct 8 20:02:56.790083 containerd[1448]: time="2024-10-08T20:02:56.789418444Z" level=info msg="StopPodSandbox for \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\" returns successfully" Oct 8 20:02:56.791271 systemd[1]: run-netns-cni\x2d057495f7\x2d33c3\x2d232e\x2dbc30\x2ded61a53e8e0c.mount: Deactivated successfully. 
Oct 8 20:02:56.793948 containerd[1448]: time="2024-10-08T20:02:56.793912295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7g2zh,Uid:f8917b79-40cc-443c-bc04-f6952b0b2955,Namespace:kube-system,Attempt:1,}" Oct 8 20:02:57.739359 systemd-networkd[1361]: calif48558c668b: Link UP Oct 8 20:02:57.746065 systemd-networkd[1361]: calif48558c668b: Gained carrier Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:56.797 [INFO][3832] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:56.818 [INFO][3832] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0 coredns-6f6b679f8f- kube-system abaa7ba7-8fa9-43d2-952d-c6239abb7b81 714 0 2024-10-08 20:02:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-b-d257b8cc02.novalocal coredns-6f6b679f8f-pk7qz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif48558c668b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Namespace="kube-system" Pod="coredns-6f6b679f8f-pk7qz" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:56.820 [INFO][3832] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Namespace="kube-system" Pod="coredns-6f6b679f8f-pk7qz" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.008 [INFO][3875] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" HandleID="k8s-pod-network.a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.076 [INFO][3875] ipam_plugin.go 270: Auto assigning IP ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" HandleID="k8s-pod-network.a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003589d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-b-d257b8cc02.novalocal", "pod":"coredns-6f6b679f8f-pk7qz", "timestamp":"2024-10-08 20:02:57.008551382 +0000 UTC"}, Hostname:"ci-4081-1-0-b-d257b8cc02.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.077 [INFO][3875] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.077 [INFO][3875] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.077 [INFO][3875] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-b-d257b8cc02.novalocal' Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.098 [INFO][3875] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.594 [INFO][3875] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.633 [INFO][3875] ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.640 [INFO][3875] ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.643 [INFO][3875] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.644 [INFO][3875] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.646 [INFO][3875] ipam.go 1685: Creating new handle: k8s-pod-network.a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.674 [INFO][3875] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.695 [INFO][3875] ipam.go 1216: Successfully claimed IPs: [192.168.61.65/26] block=192.168.61.64/26 handle="k8s-pod-network.a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.696 [INFO][3875] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.65/26] handle="k8s-pod-network.a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.696 [INFO][3875] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:02:57.798718 containerd[1448]: 2024-10-08 20:02:57.696 [INFO][3875] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.61.65/26] IPv6=[] ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" HandleID="k8s-pod-network.a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:57.803636 containerd[1448]: 2024-10-08 20:02:57.700 [INFO][3832] k8s.go 386: Populated endpoint ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Namespace="kube-system" Pod="coredns-6f6b679f8f-pk7qz" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"abaa7ba7-8fa9-43d2-952d-c6239abb7b81", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-pk7qz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif48558c668b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:02:57.803636 containerd[1448]: 2024-10-08 20:02:57.700 [INFO][3832] k8s.go 387: Calico CNI using IPs: [192.168.61.65/32] ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Namespace="kube-system" Pod="coredns-6f6b679f8f-pk7qz" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:57.803636 containerd[1448]: 2024-10-08 20:02:57.700 [INFO][3832] dataplane_linux.go 68: Setting the host side veth name to calif48558c668b ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Namespace="kube-system" Pod="coredns-6f6b679f8f-pk7qz" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:57.803636 containerd[1448]: 2024-10-08 20:02:57.746 [INFO][3832] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Namespace="kube-system" Pod="coredns-6f6b679f8f-pk7qz" 
WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:57.803636 containerd[1448]: 2024-10-08 20:02:57.748 [INFO][3832] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Namespace="kube-system" Pod="coredns-6f6b679f8f-pk7qz" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"abaa7ba7-8fa9-43d2-952d-c6239abb7b81", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e", Pod:"coredns-6f6b679f8f-pk7qz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif48558c668b", MAC:"46:17:91:2c:43:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:02:57.803636 containerd[1448]: 2024-10-08 20:02:57.791 [INFO][3832] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e" Namespace="kube-system" Pod="coredns-6f6b679f8f-pk7qz" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:02:57.879367 containerd[1448]: time="2024-10-08T20:02:57.879100444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:57.880105 containerd[1448]: time="2024-10-08T20:02:57.879790249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:57.880105 containerd[1448]: time="2024-10-08T20:02:57.879861512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:57.883076 containerd[1448]: time="2024-10-08T20:02:57.881910426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:57.921800 systemd-networkd[1361]: cali35288cd761a: Link UP Oct 8 20:02:57.922511 systemd-networkd[1361]: cali35288cd761a: Gained carrier Oct 8 20:02:57.960966 systemd[1]: Started cri-containerd-a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e.scope - libcontainer container a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e. Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.582 [INFO][3910] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.634 [INFO][3910] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0 coredns-6f6b679f8f- kube-system f8917b79-40cc-443c-bc04-f6952b0b2955 720 0 2024-10-08 20:02:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-b-d257b8cc02.novalocal coredns-6f6b679f8f-7g2zh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35288cd761a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Namespace="kube-system" Pod="coredns-6f6b679f8f-7g2zh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.634 [INFO][3910] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Namespace="kube-system" Pod="coredns-6f6b679f8f-7g2zh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.778 [INFO][3921] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" HandleID="k8s-pod-network.22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.797 [INFO][3921] ipam_plugin.go 270: Auto assigning IP ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" HandleID="k8s-pod-network.22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f0d40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-b-d257b8cc02.novalocal", "pod":"coredns-6f6b679f8f-7g2zh", "timestamp":"2024-10-08 20:02:57.778792841 +0000 UTC"}, Hostname:"ci-4081-1-0-b-d257b8cc02.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.797 [INFO][3921] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.797 [INFO][3921] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.798 [INFO][3921] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-b-d257b8cc02.novalocal' Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.802 [INFO][3921] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.817 [INFO][3921] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.828 [INFO][3921] ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.847 [INFO][3921] ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.862 [INFO][3921] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.862 [INFO][3921] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.867 [INFO][3921] ipam.go 1685: Creating new handle: k8s-pod-network.22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.882 [INFO][3921] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.906 [INFO][3921] ipam.go 1216: Successfully claimed IPs: [192.168.61.66/26] block=192.168.61.64/26 handle="k8s-pod-network.22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.908 [INFO][3921] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.66/26] handle="k8s-pod-network.22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.908 [INFO][3921] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:02:58.008041 containerd[1448]: 2024-10-08 20:02:57.908 [INFO][3921] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.61.66/26] IPv6=[] ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" HandleID="k8s-pod-network.22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:58.093489 containerd[1448]: 2024-10-08 20:02:57.917 [INFO][3910] k8s.go 386: Populated endpoint ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Namespace="kube-system" Pod="coredns-6f6b679f8f-7g2zh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f8917b79-40cc-443c-bc04-f6952b0b2955", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-7g2zh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35288cd761a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:02:58.093489 containerd[1448]: 2024-10-08 20:02:57.918 [INFO][3910] k8s.go 387: Calico CNI using IPs: [192.168.61.66/32] ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Namespace="kube-system" Pod="coredns-6f6b679f8f-7g2zh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:58.093489 containerd[1448]: 2024-10-08 20:02:57.918 [INFO][3910] dataplane_linux.go 68: Setting the host side veth name to cali35288cd761a ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Namespace="kube-system" Pod="coredns-6f6b679f8f-7g2zh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:58.093489 containerd[1448]: 2024-10-08 20:02:57.923 [INFO][3910] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Namespace="kube-system" Pod="coredns-6f6b679f8f-7g2zh" 
WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:58.093489 containerd[1448]: 2024-10-08 20:02:57.926 [INFO][3910] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Namespace="kube-system" Pod="coredns-6f6b679f8f-7g2zh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f8917b79-40cc-443c-bc04-f6952b0b2955", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b", Pod:"coredns-6f6b679f8f-7g2zh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35288cd761a", MAC:"56:4a:17:8f:36:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:02:58.093489 containerd[1448]: 2024-10-08 20:02:58.002 [INFO][3910] k8s.go 500: Wrote updated endpoint to datastore ContainerID="22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b" Namespace="kube-system" Pod="coredns-6f6b679f8f-7g2zh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:02:58.093489 containerd[1448]: time="2024-10-08T20:02:58.051379987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pk7qz,Uid:abaa7ba7-8fa9-43d2-952d-c6239abb7b81,Namespace:kube-system,Attempt:1,} returns sandbox id \"a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e\"" Oct 8 20:02:58.136302 containerd[1448]: time="2024-10-08T20:02:58.135520324Z" level=info msg="CreateContainer within sandbox \"a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:02:58.143134 containerd[1448]: time="2024-10-08T20:02:58.143045162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:58.143445 containerd[1448]: time="2024-10-08T20:02:58.143283870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:58.143445 containerd[1448]: time="2024-10-08T20:02:58.143308155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:58.143675 containerd[1448]: time="2024-10-08T20:02:58.143588521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:58.199711 systemd[1]: Started cri-containerd-22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b.scope - libcontainer container 22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b. Oct 8 20:02:58.221499 containerd[1448]: time="2024-10-08T20:02:58.220890519Z" level=info msg="CreateContainer within sandbox \"a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2c24a90bdd12504e8a28617254991524788a03fef9e6b2736cf28d82256f936\"" Oct 8 20:02:58.228432 containerd[1448]: time="2024-10-08T20:02:58.228246309Z" level=info msg="StartContainer for \"a2c24a90bdd12504e8a28617254991524788a03fef9e6b2736cf28d82256f936\"" Oct 8 20:02:58.274665 systemd[1]: Started cri-containerd-a2c24a90bdd12504e8a28617254991524788a03fef9e6b2736cf28d82256f936.scope - libcontainer container a2c24a90bdd12504e8a28617254991524788a03fef9e6b2736cf28d82256f936. Oct 8 20:02:58.301061 containerd[1448]: time="2024-10-08T20:02:58.301025682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7g2zh,Uid:f8917b79-40cc-443c-bc04-f6952b0b2955,Namespace:kube-system,Attempt:1,} returns sandbox id \"22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b\"" Oct 8 20:02:58.307102 containerd[1448]: time="2024-10-08T20:02:58.307071635Z" level=info msg="CreateContainer within sandbox \"22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:02:58.356005 containerd[1448]: time="2024-10-08T20:02:58.355432917Z" level=info msg="CreateContainer within sandbox \"22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b26725600ae64ff5b33e214e2a018bdeb09ec0f2cf7d1cf686efef567d8b4cee\"" Oct 8 20:02:58.356513 containerd[1448]: time="2024-10-08T20:02:58.356475573Z" level=info msg="StartContainer for \"b26725600ae64ff5b33e214e2a018bdeb09ec0f2cf7d1cf686efef567d8b4cee\"" Oct 8 20:02:58.392138 containerd[1448]: time="2024-10-08T20:02:58.392080452Z" level=info msg="StartContainer for \"a2c24a90bdd12504e8a28617254991524788a03fef9e6b2736cf28d82256f936\" returns successfully" Oct 8 20:02:58.409026 systemd[1]: Started cri-containerd-b26725600ae64ff5b33e214e2a018bdeb09ec0f2cf7d1cf686efef567d8b4cee.scope - libcontainer container b26725600ae64ff5b33e214e2a018bdeb09ec0f2cf7d1cf686efef567d8b4cee. 
Oct 8 20:02:58.435070 kernel: bpftool[4115]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 20:02:58.473360 containerd[1448]: time="2024-10-08T20:02:58.473315900Z" level=info msg="StartContainer for \"b26725600ae64ff5b33e214e2a018bdeb09ec0f2cf7d1cf686efef567d8b4cee\" returns successfully" Oct 8 20:02:58.478566 containerd[1448]: time="2024-10-08T20:02:58.478251370Z" level=info msg="StopPodSandbox for \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\"" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.575 [INFO][4141] k8s.go 608: Cleaning up netns ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.575 [INFO][4141] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" iface="eth0" netns="/var/run/netns/cni-5452e465-fb97-02c0-60b0-01677d1cec2a" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.575 [INFO][4141] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" iface="eth0" netns="/var/run/netns/cni-5452e465-fb97-02c0-60b0-01677d1cec2a" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.576 [INFO][4141] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" iface="eth0" netns="/var/run/netns/cni-5452e465-fb97-02c0-60b0-01677d1cec2a" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.576 [INFO][4141] k8s.go 615: Releasing IP address(es) ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.576 [INFO][4141] utils.go 188: Calico CNI releasing IP address ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.622 [INFO][4148] ipam_plugin.go 417: Releasing address using handleID ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" HandleID="k8s-pod-network.1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.622 [INFO][4148] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.622 [INFO][4148] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.630 [WARNING][4148] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" HandleID="k8s-pod-network.1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.631 [INFO][4148] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" HandleID="k8s-pod-network.1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.633 [INFO][4148] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:02:58.637511 containerd[1448]: 2024-10-08 20:02:58.635 [INFO][4141] k8s.go 621: Teardown processing complete. ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:02:58.639888 containerd[1448]: time="2024-10-08T20:02:58.637770707Z" level=info msg="TearDown network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\" successfully" Oct 8 20:02:58.639888 containerd[1448]: time="2024-10-08T20:02:58.637800693Z" level=info msg="StopPodSandbox for \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\" returns successfully" Oct 8 20:02:58.641246 containerd[1448]: time="2024-10-08T20:02:58.640726913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7lz5k,Uid:b58e5c77-b9e1-40fc-b328-51c2c4af45f8,Namespace:calico-system,Attempt:1,}" Oct 8 20:02:58.757349 kubelet[2584]: I1008 20:02:58.756191 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pk7qz" podStartSLOduration=39.756166232 podStartE2EDuration="39.756166232s" podCreationTimestamp="2024-10-08 20:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:58.736245527 +0000 UTC m=+45.490883080" watchObservedRunningTime="2024-10-08 20:02:58.756166232 +0000 UTC m=+45.510803765" Oct 8 20:02:58.757349 kubelet[2584]: I1008 20:02:58.756412 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7g2zh" podStartSLOduration=39.756405921 podStartE2EDuration="39.756405921s" podCreationTimestamp="2024-10-08 20:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:58.754859941 +0000 UTC m=+45.509497504" watchObservedRunningTime="2024-10-08 20:02:58.756405921 +0000 UTC m=+45.511043464" Oct 8 20:02:58.904363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343209267.mount: Deactivated successfully. Oct 8 20:02:58.904509 systemd[1]: run-netns-cni\x2d5452e465\x2dfb97\x2d02c0\x2d60b0\x2d01677d1cec2a.mount: Deactivated successfully. 
Oct 8 20:02:58.918674 systemd-networkd[1361]: calif48558c668b: Gained IPv6LL Oct 8 20:02:58.952605 systemd-networkd[1361]: calib3148c6ba8e: Link UP Oct 8 20:02:58.954992 systemd-networkd[1361]: calib3148c6ba8e: Gained carrier Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.768 [INFO][4158] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0 csi-node-driver- calico-system b58e5c77-b9e1-40fc-b328-51c2c4af45f8 738 0 2024-10-08 20:02:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081-1-0-b-d257b8cc02.novalocal csi-node-driver-7lz5k eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib3148c6ba8e [] []}} ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Namespace="calico-system" Pod="csi-node-driver-7lz5k" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.768 [INFO][4158] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Namespace="calico-system" Pod="csi-node-driver-7lz5k" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.849 [INFO][4171] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" HandleID="k8s-pod-network.ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.864 [INFO][4171] ipam_plugin.go 270: Auto assigning IP ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" HandleID="k8s-pod-network.ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026f990), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-b-d257b8cc02.novalocal", "pod":"csi-node-driver-7lz5k", "timestamp":"2024-10-08 20:02:58.849166091 +0000 UTC"}, Hostname:"ci-4081-1-0-b-d257b8cc02.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.864 [INFO][4171] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.864 [INFO][4171] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.865 [INFO][4171] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-b-d257b8cc02.novalocal' Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.870 [INFO][4171] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.884 [INFO][4171] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.893 [INFO][4171] ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.898 [INFO][4171] ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.920 [INFO][4171] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.920 [INFO][4171] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.923 [INFO][4171] ipam.go 1685: Creating new handle: k8s-pod-network.ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1 Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.930 [INFO][4171] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.945 [INFO][4171] ipam.go 1216: Successfully claimed IPs: [192.168.61.67/26] block=192.168.61.64/26 handle="k8s-pod-network.ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.945 [INFO][4171] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.67/26] handle="k8s-pod-network.ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.945 [INFO][4171] ipam_plugin.go 379: Released host-wide IPAM lock. 
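In the IPAM sequence above, the host ci-4081-1-0-b-d257b8cc02.novalocal already holds an affinity for the 192.168.61.64/26 block, so the plugin only has to claim the next free address in that block (here 192.168.61.67) rather than allocate a new block. A simplified sketch of that "next free address in an affine block" step, assuming 192.168.61.64 through .66 are already taken (the .65/.66 coredns allocations are implied elsewhere in this log); this is not Calico's actual IPAM code:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks an affine block and returns the first address not already assigned.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.61.64/26")
        // Assumed to be in use already: the block's network address and the two coredns pod IPs.
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.61.64"): true,
            netip.MustParseAddr("192.168.61.65"): true,
            netip.MustParseAddr("192.168.61.66"): true,
        }
        if a, ok := nextFree(block, used); ok {
            fmt.Println(a) // 192.168.61.67, the address claimed above for csi-node-driver-7lz5k
        }
    }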
Oct 8 20:02:58.992064 containerd[1448]: 2024-10-08 20:02:58.945 [INFO][4171] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.61.67/26] IPv6=[] ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" HandleID="k8s-pod-network.ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:58.993022 containerd[1448]: 2024-10-08 20:02:58.947 [INFO][4158] k8s.go 386: Populated endpoint ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Namespace="calico-system" Pod="csi-node-driver-7lz5k" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b58e5c77-b9e1-40fc-b328-51c2c4af45f8", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"", Pod:"csi-node-driver-7lz5k", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib3148c6ba8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:02:58.993022 containerd[1448]: 2024-10-08 20:02:58.948 [INFO][4158] k8s.go 387: Calico CNI using IPs: [192.168.61.67/32] ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Namespace="calico-system" Pod="csi-node-driver-7lz5k" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:58.993022 containerd[1448]: 2024-10-08 20:02:58.948 [INFO][4158] dataplane_linux.go 68: Setting the host side veth name to calib3148c6ba8e ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Namespace="calico-system" Pod="csi-node-driver-7lz5k" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:58.993022 containerd[1448]: 2024-10-08 20:02:58.952 [INFO][4158] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Namespace="calico-system" Pod="csi-node-driver-7lz5k" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:58.993022 containerd[1448]: 2024-10-08 20:02:58.954 [INFO][4158] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" 
Namespace="calico-system" Pod="csi-node-driver-7lz5k" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b58e5c77-b9e1-40fc-b328-51c2c4af45f8", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1", Pod:"csi-node-driver-7lz5k", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib3148c6ba8e", MAC:"ba:92:0b:b1:f8:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:02:58.993022 containerd[1448]: 2024-10-08 20:02:58.989 [INFO][4158] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1" Namespace="calico-system" Pod="csi-node-driver-7lz5k" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:02:59.036292 containerd[1448]: time="2024-10-08T20:02:59.036161325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:59.036292 containerd[1448]: time="2024-10-08T20:02:59.036227279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:59.036292 containerd[1448]: time="2024-10-08T20:02:59.036240594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:59.036559 containerd[1448]: time="2024-10-08T20:02:59.036462069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:59.076943 systemd[1]: Started cri-containerd-ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1.scope - libcontainer container ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1. 
Oct 8 20:02:59.184144 containerd[1448]: time="2024-10-08T20:02:59.183499634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7lz5k,Uid:b58e5c77-b9e1-40fc-b328-51c2c4af45f8,Namespace:calico-system,Attempt:1,} returns sandbox id \"ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1\"" Oct 8 20:02:59.202465 containerd[1448]: time="2024-10-08T20:02:59.200295745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 20:02:59.374655 systemd-networkd[1361]: vxlan.calico: Link UP Oct 8 20:02:59.374664 systemd-networkd[1361]: vxlan.calico: Gained carrier Oct 8 20:02:59.748664 systemd-networkd[1361]: cali35288cd761a: Gained IPv6LL Oct 8 20:03:00.003180 systemd-networkd[1361]: calib3148c6ba8e: Gained IPv6LL Oct 8 20:03:01.090322 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Oct 8 20:03:02.200602 containerd[1448]: time="2024-10-08T20:03:02.200476504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:02.201591 containerd[1448]: time="2024-10-08T20:03:02.201445401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 8 20:03:02.202826 containerd[1448]: time="2024-10-08T20:03:02.202791537Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:02.206265 containerd[1448]: time="2024-10-08T20:03:02.206174454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:02.207100 containerd[1448]: time="2024-10-08T20:03:02.206909171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 3.006573321s" Oct 8 20:03:02.207100 containerd[1448]: time="2024-10-08T20:03:02.206947033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 8 20:03:02.210076 containerd[1448]: time="2024-10-08T20:03:02.209923356Z" level=info msg="CreateContainer within sandbox \"ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 20:03:02.249296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315212534.mount: Deactivated successfully. 
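The pull duration reported above is consistent with the surrounding timestamps: PullImage for ghcr.io/flatcar/calico/csi:v3.28.1 is requested at 20:02:59.200 and the Pulled event lands at 20:03:02.206, matching the reported 3.006573321s; at roughly 7.6 MB read, that works out to on the order of 2.5 MB/s from the registry.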
Oct 8 20:03:02.253332 containerd[1448]: time="2024-10-08T20:03:02.253226993Z" level=info msg="CreateContainer within sandbox \"ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c1f1b7b51e2579a597fdc56fcabbca2ee0f5b6a30fb617db9beda5b54e0de5f4\"" Oct 8 20:03:02.258897 containerd[1448]: time="2024-10-08T20:03:02.255275676Z" level=info msg="StartContainer for \"c1f1b7b51e2579a597fdc56fcabbca2ee0f5b6a30fb617db9beda5b54e0de5f4\"" Oct 8 20:03:02.305268 systemd[1]: Started cri-containerd-c1f1b7b51e2579a597fdc56fcabbca2ee0f5b6a30fb617db9beda5b54e0de5f4.scope - libcontainer container c1f1b7b51e2579a597fdc56fcabbca2ee0f5b6a30fb617db9beda5b54e0de5f4. Oct 8 20:03:02.387534 containerd[1448]: time="2024-10-08T20:03:02.387481121Z" level=info msg="StartContainer for \"c1f1b7b51e2579a597fdc56fcabbca2ee0f5b6a30fb617db9beda5b54e0de5f4\" returns successfully" Oct 8 20:03:02.389567 containerd[1448]: time="2024-10-08T20:03:02.389506921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 20:03:04.876579 containerd[1448]: time="2024-10-08T20:03:04.876083989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:04.880316 containerd[1448]: time="2024-10-08T20:03:04.879387115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 8 20:03:04.881211 containerd[1448]: time="2024-10-08T20:03:04.881107142Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:04.889108 containerd[1448]: time="2024-10-08T20:03:04.888909037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:04.894821 containerd[1448]: time="2024-10-08T20:03:04.894560129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.505004095s" Oct 8 20:03:04.894821 containerd[1448]: time="2024-10-08T20:03:04.894635320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 8 20:03:04.900991 containerd[1448]: time="2024-10-08T20:03:04.899194742Z" level=info msg="CreateContainer within sandbox \"ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 20:03:04.956900 containerd[1448]: time="2024-10-08T20:03:04.956699207Z" level=info msg="CreateContainer within sandbox \"ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"251ceaa8699c5fb766cae59b355b03fb59135161c6b52c5ea8a4c3ba1351fa3c\"" Oct 8 20:03:04.958450 containerd[1448]: time="2024-10-08T20:03:04.958120733Z" 
level=info msg="StartContainer for \"251ceaa8699c5fb766cae59b355b03fb59135161c6b52c5ea8a4c3ba1351fa3c\"" Oct 8 20:03:05.021892 systemd[1]: Started cri-containerd-251ceaa8699c5fb766cae59b355b03fb59135161c6b52c5ea8a4c3ba1351fa3c.scope - libcontainer container 251ceaa8699c5fb766cae59b355b03fb59135161c6b52c5ea8a4c3ba1351fa3c. Oct 8 20:03:05.077677 containerd[1448]: time="2024-10-08T20:03:05.076539486Z" level=info msg="StartContainer for \"251ceaa8699c5fb766cae59b355b03fb59135161c6b52c5ea8a4c3ba1351fa3c\" returns successfully" Oct 8 20:03:05.267687 kubelet[2584]: I1008 20:03:05.267503 2584 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 20:03:05.270470 kubelet[2584]: I1008 20:03:05.270427 2584 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 20:03:06.199052 kubelet[2584]: I1008 20:03:06.198834 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7lz5k" podStartSLOduration=34.489170866 podStartE2EDuration="40.198711158s" podCreationTimestamp="2024-10-08 20:02:26 +0000 UTC" firstStartedPulling="2024-10-08 20:02:59.186391489 +0000 UTC m=+45.941029022" lastFinishedPulling="2024-10-08 20:03:04.895931721 +0000 UTC m=+51.650569314" observedRunningTime="2024-10-08 20:03:05.985098207 +0000 UTC m=+52.739735820" watchObservedRunningTime="2024-10-08 20:03:06.198711158 +0000 UTC m=+52.953349082" Oct 8 20:03:10.490207 containerd[1448]: time="2024-10-08T20:03:10.489133559Z" level=info msg="StopPodSandbox for \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\"" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.597 [INFO][4423] k8s.go 608: Cleaning up netns ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.597 [INFO][4423] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" iface="eth0" netns="/var/run/netns/cni-288b22a4-92fb-827e-a04f-36aa8ae1cbf5" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.600 [INFO][4423] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" iface="eth0" netns="/var/run/netns/cni-288b22a4-92fb-827e-a04f-36aa8ae1cbf5" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.600 [INFO][4423] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" iface="eth0" netns="/var/run/netns/cni-288b22a4-92fb-827e-a04f-36aa8ae1cbf5" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.600 [INFO][4423] k8s.go 615: Releasing IP address(es) ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.600 [INFO][4423] utils.go 188: Calico CNI releasing IP address ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.635 [INFO][4430] ipam_plugin.go 417: Releasing address using handleID ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" HandleID="k8s-pod-network.88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.635 [INFO][4430] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.636 [INFO][4430] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.642 [WARNING][4430] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" HandleID="k8s-pod-network.88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.643 [INFO][4430] ipam_plugin.go 445: Releasing address using workloadID ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" HandleID="k8s-pod-network.88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.644 [INFO][4430] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:10.647202 containerd[1448]: 2024-10-08 20:03:10.645 [INFO][4423] k8s.go 621: Teardown processing complete. ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:10.650355 containerd[1448]: time="2024-10-08T20:03:10.649067281Z" level=info msg="TearDown network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\" successfully" Oct 8 20:03:10.650355 containerd[1448]: time="2024-10-08T20:03:10.649109440Z" level=info msg="StopPodSandbox for \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\" returns successfully" Oct 8 20:03:10.650355 containerd[1448]: time="2024-10-08T20:03:10.649945047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b597657d9-5blcx,Uid:5180ec07-7d2b-4c4f-81ef-844e22c96463,Namespace:calico-system,Attempt:1,}" Oct 8 20:03:10.651685 systemd[1]: run-netns-cni\x2d288b22a4\x2d92fb\x2d827e\x2da04f\x2d36aa8ae1cbf5.mount: Deactivated successfully. 
Oct 8 20:03:10.799155 systemd-networkd[1361]: cali639e9b52b8f: Link UP Oct 8 20:03:10.799957 systemd-networkd[1361]: cali639e9b52b8f: Gained carrier Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.710 [INFO][4438] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0 calico-kube-controllers-5b597657d9- calico-system 5180ec07-7d2b-4c4f-81ef-844e22c96463 794 0 2024-10-08 20:02:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b597657d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-1-0-b-d257b8cc02.novalocal calico-kube-controllers-5b597657d9-5blcx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali639e9b52b8f [] []}} ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Namespace="calico-system" Pod="calico-kube-controllers-5b597657d9-5blcx" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.710 [INFO][4438] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Namespace="calico-system" Pod="calico-kube-controllers-5b597657d9-5blcx" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.741 [INFO][4449] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" HandleID="k8s-pod-network.176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.753 [INFO][4449] ipam_plugin.go 270: Auto assigning IP ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" HandleID="k8s-pod-network.176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003182f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-b-d257b8cc02.novalocal", "pod":"calico-kube-controllers-5b597657d9-5blcx", "timestamp":"2024-10-08 20:03:10.741862776 +0000 UTC"}, Hostname:"ci-4081-1-0-b-d257b8cc02.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.753 [INFO][4449] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.754 [INFO][4449] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.754 [INFO][4449] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-b-d257b8cc02.novalocal' Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.756 [INFO][4449] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.760 [INFO][4449] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.769 [INFO][4449] ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.771 [INFO][4449] ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.774 [INFO][4449] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.774 [INFO][4449] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.776 [INFO][4449] ipam.go 1685: Creating new handle: k8s-pod-network.176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.784 [INFO][4449] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.792 [INFO][4449] ipam.go 1216: Successfully claimed IPs: [192.168.61.68/26] block=192.168.61.64/26 handle="k8s-pod-network.176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.792 [INFO][4449] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.68/26] handle="k8s-pod-network.176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.793 [INFO][4449] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:03:10.829788 containerd[1448]: 2024-10-08 20:03:10.793 [INFO][4449] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.61.68/26] IPv6=[] ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" HandleID="k8s-pod-network.176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:10.830500 containerd[1448]: 2024-10-08 20:03:10.795 [INFO][4438] k8s.go 386: Populated endpoint ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Namespace="calico-system" Pod="calico-kube-controllers-5b597657d9-5blcx" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0", GenerateName:"calico-kube-controllers-5b597657d9-", Namespace:"calico-system", SelfLink:"", UID:"5180ec07-7d2b-4c4f-81ef-844e22c96463", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b597657d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"", Pod:"calico-kube-controllers-5b597657d9-5blcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali639e9b52b8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:10.830500 containerd[1448]: 2024-10-08 20:03:10.795 [INFO][4438] k8s.go 387: Calico CNI using IPs: [192.168.61.68/32] ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Namespace="calico-system" Pod="calico-kube-controllers-5b597657d9-5blcx" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:10.830500 containerd[1448]: 2024-10-08 20:03:10.795 [INFO][4438] dataplane_linux.go 68: Setting the host side veth name to cali639e9b52b8f ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Namespace="calico-system" Pod="calico-kube-controllers-5b597657d9-5blcx" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:10.830500 containerd[1448]: 2024-10-08 20:03:10.799 [INFO][4438] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Namespace="calico-system" Pod="calico-kube-controllers-5b597657d9-5blcx" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 
8 20:03:10.830500 containerd[1448]: 2024-10-08 20:03:10.800 [INFO][4438] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Namespace="calico-system" Pod="calico-kube-controllers-5b597657d9-5blcx" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0", GenerateName:"calico-kube-controllers-5b597657d9-", Namespace:"calico-system", SelfLink:"", UID:"5180ec07-7d2b-4c4f-81ef-844e22c96463", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b597657d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a", Pod:"calico-kube-controllers-5b597657d9-5blcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali639e9b52b8f", MAC:"32:f7:47:79:34:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:10.830500 containerd[1448]: 2024-10-08 20:03:10.821 [INFO][4438] k8s.go 500: Wrote updated endpoint to datastore ContainerID="176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a" Namespace="calico-system" Pod="calico-kube-controllers-5b597657d9-5blcx" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:10.860520 containerd[1448]: time="2024-10-08T20:03:10.860410937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:10.860520 containerd[1448]: time="2024-10-08T20:03:10.860472312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:10.860520 containerd[1448]: time="2024-10-08T20:03:10.860493051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:10.861163 containerd[1448]: time="2024-10-08T20:03:10.860623095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:10.892953 systemd[1]: Started cri-containerd-176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a.scope - libcontainer container 176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a. 
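Each workload in this section follows the same pattern: an earlier sandbox for the pod is stopped and its netns torn down, RunPodSandbox is retried with Attempt:1, Calico assigns the next address from the host's 192.168.61.64/26 block (.67 for csi-node-driver-7lz5k, .68 for calico-kube-controllers-5b597657d9-5blcx), the cali* host-side veth comes up and gains an IPv6 link-local address, and systemd starts a cri-containerd-<id>.scope unit for the new sandbox.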
Oct 8 20:03:10.937874 containerd[1448]: time="2024-10-08T20:03:10.937842952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b597657d9-5blcx,Uid:5180ec07-7d2b-4c4f-81ef-844e22c96463,Namespace:calico-system,Attempt:1,} returns sandbox id \"176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a\"" Oct 8 20:03:10.939887 containerd[1448]: time="2024-10-08T20:03:10.939521902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 20:03:12.802878 systemd-networkd[1361]: cali639e9b52b8f: Gained IPv6LL Oct 8 20:03:13.511376 containerd[1448]: time="2024-10-08T20:03:13.511330540Z" level=info msg="StopPodSandbox for \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\"" Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.601 [WARNING][4531] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"abaa7ba7-8fa9-43d2-952d-c6239abb7b81", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e", Pod:"coredns-6f6b679f8f-pk7qz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif48558c668b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.601 [INFO][4531] k8s.go 608: Cleaning up netns ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.601 [INFO][4531] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" iface="eth0" netns="" Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.601 [INFO][4531] k8s.go 615: Releasing IP address(es) ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.601 [INFO][4531] utils.go 188: Calico CNI releasing IP address ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.644 [INFO][4537] ipam_plugin.go 417: Releasing address using handleID ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" HandleID="k8s-pod-network.345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.644 [INFO][4537] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.645 [INFO][4537] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.662 [WARNING][4537] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" HandleID="k8s-pod-network.345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.662 [INFO][4537] ipam_plugin.go 445: Releasing address using workloadID ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" HandleID="k8s-pod-network.345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.665 [INFO][4537] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:13.669058 containerd[1448]: 2024-10-08 20:03:13.666 [INFO][4531] k8s.go 621: Teardown processing complete. ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:03:13.669058 containerd[1448]: time="2024-10-08T20:03:13.668901059Z" level=info msg="TearDown network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\" successfully" Oct 8 20:03:13.669058 containerd[1448]: time="2024-10-08T20:03:13.668932538Z" level=info msg="StopPodSandbox for \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\" returns successfully" Oct 8 20:03:13.679401 containerd[1448]: time="2024-10-08T20:03:13.678883626Z" level=info msg="RemovePodSandbox for \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\"" Oct 8 20:03:13.685709 containerd[1448]: time="2024-10-08T20:03:13.685584224Z" level=info msg="Forcibly stopping sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\"" Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.735 [WARNING][4557] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"abaa7ba7-8fa9-43d2-952d-c6239abb7b81", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"a1aa75ef2f3ca7792dbbe4f07b53a797fe8d36b7c3d1fcdee1c931a4325b583e", Pod:"coredns-6f6b679f8f-pk7qz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif48558c668b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.736 [INFO][4557] k8s.go 608: Cleaning up netns ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.736 [INFO][4557] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" iface="eth0" netns="" Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.736 [INFO][4557] k8s.go 615: Releasing IP address(es) ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.736 [INFO][4557] utils.go 188: Calico CNI releasing IP address ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.773 [INFO][4564] ipam_plugin.go 417: Releasing address using handleID ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" HandleID="k8s-pod-network.345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.774 [INFO][4564] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.774 [INFO][4564] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.783 [WARNING][4564] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" HandleID="k8s-pod-network.345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.783 [INFO][4564] ipam_plugin.go 445: Releasing address using workloadID ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" HandleID="k8s-pod-network.345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--pk7qz-eth0" Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.786 [INFO][4564] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:13.790549 containerd[1448]: 2024-10-08 20:03:13.788 [INFO][4557] k8s.go 621: Teardown processing complete. ContainerID="345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40" Oct 8 20:03:13.790549 containerd[1448]: time="2024-10-08T20:03:13.790513131Z" level=info msg="TearDown network for sandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\" successfully" Oct 8 20:03:14.785857 containerd[1448]: time="2024-10-08T20:03:14.785629874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:03:14.792060 containerd[1448]: time="2024-10-08T20:03:14.785864977Z" level=info msg="RemovePodSandbox \"345c7a23dfdbeba43f0471ccbc3f6466bb6a829dfe5e8671768efa8794addb40\" returns successfully" Oct 8 20:03:14.792060 containerd[1448]: time="2024-10-08T20:03:14.786847732Z" level=info msg="StopPodSandbox for \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\"" Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.896 [WARNING][4582] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0", GenerateName:"calico-kube-controllers-5b597657d9-", Namespace:"calico-system", SelfLink:"", UID:"5180ec07-7d2b-4c4f-81ef-844e22c96463", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b597657d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a", Pod:"calico-kube-controllers-5b597657d9-5blcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali639e9b52b8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.897 [INFO][4582] k8s.go 608: Cleaning up netns ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.897 [INFO][4582] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" iface="eth0" netns="" Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.897 [INFO][4582] k8s.go 615: Releasing IP address(es) ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.897 [INFO][4582] utils.go 188: Calico CNI releasing IP address ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.946 [INFO][4588] ipam_plugin.go 417: Releasing address using handleID ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" HandleID="k8s-pod-network.88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.947 [INFO][4588] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.947 [INFO][4588] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.955 [WARNING][4588] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" HandleID="k8s-pod-network.88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.955 [INFO][4588] ipam_plugin.go 445: Releasing address using workloadID ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" HandleID="k8s-pod-network.88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.958 [INFO][4588] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:14.960875 containerd[1448]: 2024-10-08 20:03:14.959 [INFO][4582] k8s.go 621: Teardown processing complete. ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:14.961593 containerd[1448]: time="2024-10-08T20:03:14.961065618Z" level=info msg="TearDown network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\" successfully" Oct 8 20:03:14.961593 containerd[1448]: time="2024-10-08T20:03:14.961090955Z" level=info msg="StopPodSandbox for \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\" returns successfully" Oct 8 20:03:14.962250 containerd[1448]: time="2024-10-08T20:03:14.962001514Z" level=info msg="RemovePodSandbox for \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\"" Oct 8 20:03:14.962250 containerd[1448]: time="2024-10-08T20:03:14.962031500Z" level=info msg="Forcibly stopping sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\"" Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.048 [WARNING][4606] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0", GenerateName:"calico-kube-controllers-5b597657d9-", Namespace:"calico-system", SelfLink:"", UID:"5180ec07-7d2b-4c4f-81ef-844e22c96463", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b597657d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a", Pod:"calico-kube-controllers-5b597657d9-5blcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali639e9b52b8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.049 [INFO][4606] k8s.go 608: Cleaning up netns ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.049 [INFO][4606] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" iface="eth0" netns="" Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.049 [INFO][4606] k8s.go 615: Releasing IP address(es) ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.049 [INFO][4606] utils.go 188: Calico CNI releasing IP address ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.091 [INFO][4612] ipam_plugin.go 417: Releasing address using handleID ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" HandleID="k8s-pod-network.88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.091 [INFO][4612] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.091 [INFO][4612] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.100 [WARNING][4612] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" HandleID="k8s-pod-network.88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.100 [INFO][4612] ipam_plugin.go 445: Releasing address using workloadID ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" HandleID="k8s-pod-network.88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--kube--controllers--5b597657d9--5blcx-eth0" Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.103 [INFO][4612] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:15.111351 containerd[1448]: 2024-10-08 20:03:15.108 [INFO][4606] k8s.go 621: Teardown processing complete. ContainerID="88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96" Oct 8 20:03:15.112018 containerd[1448]: time="2024-10-08T20:03:15.111980355Z" level=info msg="TearDown network for sandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\" successfully" Oct 8 20:03:15.185928 containerd[1448]: time="2024-10-08T20:03:15.185884155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:15.204545 containerd[1448]: time="2024-10-08T20:03:15.204494614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 8 20:03:15.206195 containerd[1448]: time="2024-10-08T20:03:15.206170968Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:15.207018 containerd[1448]: time="2024-10-08T20:03:15.206994832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:03:15.207370 containerd[1448]: time="2024-10-08T20:03:15.207350673Z" level=info msg="RemovePodSandbox \"88546961e9eaad4c77029906d3501005342a5294997c9c30f56991bd5361de96\" returns successfully" Oct 8 20:03:15.208831 containerd[1448]: time="2024-10-08T20:03:15.208700050Z" level=info msg="StopPodSandbox for \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\"" Oct 8 20:03:15.212662 containerd[1448]: time="2024-10-08T20:03:15.211224313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:15.212662 containerd[1448]: time="2024-10-08T20:03:15.212283892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 4.272217599s" Oct 8 20:03:15.212662 containerd[1448]: time="2024-10-08T20:03:15.212313990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 8 20:03:15.254567 containerd[1448]: time="2024-10-08T20:03:15.254518171Z" level=info msg="CreateContainer within sandbox \"176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 20:03:15.284156 containerd[1448]: time="2024-10-08T20:03:15.284115453Z" level=info msg="CreateContainer within sandbox \"176cfa3585fdcdde724fe43a81ddbe33d932ac5b08204160710749e4bf48cd8a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c784977670e60f82278b61a222e4a07afffd0a8a03aa6dfd3eacd13e77d9d1a9\"" Oct 8 20:03:15.285071 containerd[1448]: time="2024-10-08T20:03:15.285050957Z" level=info msg="StartContainer for \"c784977670e60f82278b61a222e4a07afffd0a8a03aa6dfd3eacd13e77d9d1a9\"" Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.303 [WARNING][4633] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b58e5c77-b9e1-40fc-b328-51c2c4af45f8", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1", Pod:"csi-node-driver-7lz5k", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib3148c6ba8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.304 [INFO][4633] k8s.go 608: Cleaning up netns ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.304 [INFO][4633] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" iface="eth0" netns="" Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.304 [INFO][4633] k8s.go 615: Releasing IP address(es) ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.304 [INFO][4633] utils.go 188: Calico CNI releasing IP address ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.331 [INFO][4642] ipam_plugin.go 417: Releasing address using handleID ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" HandleID="k8s-pod-network.1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.331 [INFO][4642] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.331 [INFO][4642] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.346 [WARNING][4642] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" HandleID="k8s-pod-network.1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.346 [INFO][4642] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" HandleID="k8s-pod-network.1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.348 [INFO][4642] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:15.352139 containerd[1448]: 2024-10-08 20:03:15.349 [INFO][4633] k8s.go 621: Teardown processing complete. ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:03:15.352670 containerd[1448]: time="2024-10-08T20:03:15.352643648Z" level=info msg="TearDown network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\" successfully" Oct 8 20:03:15.354142 containerd[1448]: time="2024-10-08T20:03:15.352727646Z" level=info msg="StopPodSandbox for \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\" returns successfully" Oct 8 20:03:15.355587 containerd[1448]: time="2024-10-08T20:03:15.355261519Z" level=info msg="RemovePodSandbox for \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\"" Oct 8 20:03:15.355587 containerd[1448]: time="2024-10-08T20:03:15.355297636Z" level=info msg="Forcibly stopping sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\"" Oct 8 20:03:15.403018 systemd[1]: Started cri-containerd-c784977670e60f82278b61a222e4a07afffd0a8a03aa6dfd3eacd13e77d9d1a9.scope - libcontainer container c784977670e60f82278b61a222e4a07afffd0a8a03aa6dfd3eacd13e77d9d1a9. Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.406 [WARNING][4668] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b58e5c77-b9e1-40fc-b328-51c2c4af45f8", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"ce76a9970d1fc3546453d2332784b331dbed9b351c4c4dfa1064f12ceb0044e1", Pod:"csi-node-driver-7lz5k", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib3148c6ba8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.406 [INFO][4668] k8s.go 608: Cleaning up netns ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.406 [INFO][4668] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" iface="eth0" netns="" Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.406 [INFO][4668] k8s.go 615: Releasing IP address(es) ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.407 [INFO][4668] utils.go 188: Calico CNI releasing IP address ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.434 [INFO][4683] ipam_plugin.go 417: Releasing address using handleID ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" HandleID="k8s-pod-network.1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.435 [INFO][4683] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.435 [INFO][4683] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.442 [WARNING][4683] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" HandleID="k8s-pod-network.1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.442 [INFO][4683] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" HandleID="k8s-pod-network.1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-csi--node--driver--7lz5k-eth0" Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.444 [INFO][4683] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:15.450783 containerd[1448]: 2024-10-08 20:03:15.446 [INFO][4668] k8s.go 621: Teardown processing complete. ContainerID="1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb" Oct 8 20:03:15.450783 containerd[1448]: time="2024-10-08T20:03:15.448982635Z" level=info msg="TearDown network for sandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\" successfully" Oct 8 20:03:15.480437 containerd[1448]: time="2024-10-08T20:03:15.479883416Z" level=info msg="StartContainer for \"c784977670e60f82278b61a222e4a07afffd0a8a03aa6dfd3eacd13e77d9d1a9\" returns successfully" Oct 8 20:03:15.480728 containerd[1448]: time="2024-10-08T20:03:15.480685900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:03:15.480908 containerd[1448]: time="2024-10-08T20:03:15.480778305Z" level=info msg="RemovePodSandbox \"1f4689eaa4355e4e18497a4303699609ca05c9a77e64dec38b3b4f4c2a0366fb\" returns successfully" Oct 8 20:03:15.481815 containerd[1448]: time="2024-10-08T20:03:15.481515296Z" level=info msg="StopPodSandbox for \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\"" Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.540 [WARNING][4718] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f8917b79-40cc-443c-bc04-f6952b0b2955", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b", Pod:"coredns-6f6b679f8f-7g2zh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35288cd761a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.541 [INFO][4718] k8s.go 608: Cleaning up netns ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.541 [INFO][4718] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" iface="eth0" netns="" Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.541 [INFO][4718] k8s.go 615: Releasing IP address(es) ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.541 [INFO][4718] utils.go 188: Calico CNI releasing IP address ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.588 [INFO][4725] ipam_plugin.go 417: Releasing address using handleID ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" HandleID="k8s-pod-network.2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.588 [INFO][4725] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.588 [INFO][4725] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.595 [WARNING][4725] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" HandleID="k8s-pod-network.2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.595 [INFO][4725] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" HandleID="k8s-pod-network.2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.597 [INFO][4725] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:15.602104 containerd[1448]: 2024-10-08 20:03:15.600 [INFO][4718] k8s.go 621: Teardown processing complete. ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:03:15.606783 containerd[1448]: time="2024-10-08T20:03:15.602814072Z" level=info msg="TearDown network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\" successfully" Oct 8 20:03:15.606783 containerd[1448]: time="2024-10-08T20:03:15.602855791Z" level=info msg="StopPodSandbox for \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\" returns successfully" Oct 8 20:03:15.606783 containerd[1448]: time="2024-10-08T20:03:15.603916733Z" level=info msg="RemovePodSandbox for \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\"" Oct 8 20:03:15.606783 containerd[1448]: time="2024-10-08T20:03:15.603963712Z" level=info msg="Forcibly stopping sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\"" Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.664 [WARNING][4744] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f8917b79-40cc-443c-bc04-f6952b0b2955", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 2, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"22b1ebe9ffff2dc29478bc6f0330915630744f6ee97d622132826a357d1f613b", Pod:"coredns-6f6b679f8f-7g2zh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35288cd761a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.666 [INFO][4744] k8s.go 608: Cleaning up netns ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.666 [INFO][4744] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" iface="eth0" netns="" Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.666 [INFO][4744] k8s.go 615: Releasing IP address(es) ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.666 [INFO][4744] utils.go 188: Calico CNI releasing IP address ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.689 [INFO][4751] ipam_plugin.go 417: Releasing address using handleID ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" HandleID="k8s-pod-network.2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.690 [INFO][4751] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.690 [INFO][4751] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.703 [WARNING][4751] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" HandleID="k8s-pod-network.2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.703 [INFO][4751] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" HandleID="k8s-pod-network.2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-coredns--6f6b679f8f--7g2zh-eth0" Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.708 [INFO][4751] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:15.713286 containerd[1448]: 2024-10-08 20:03:15.709 [INFO][4744] k8s.go 621: Teardown processing complete. ContainerID="2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504" Oct 8 20:03:15.713286 containerd[1448]: time="2024-10-08T20:03:15.712039596Z" level=info msg="TearDown network for sandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\" successfully" Oct 8 20:03:15.715984 containerd[1448]: time="2024-10-08T20:03:15.715947190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:03:15.716110 containerd[1448]: time="2024-10-08T20:03:15.716083708Z" level=info msg="RemovePodSandbox \"2cb33fc17e8fc524d7b1f63c893cd78371df46a32b1a39c5b79671b9ea26b504\" returns successfully" Oct 8 20:03:15.926113 kubelet[2584]: I1008 20:03:15.926043 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b597657d9-5blcx" podStartSLOduration=45.650743446 podStartE2EDuration="49.925914539s" podCreationTimestamp="2024-10-08 20:02:26 +0000 UTC" firstStartedPulling="2024-10-08 20:03:10.939283836 +0000 UTC m=+57.693921369" lastFinishedPulling="2024-10-08 20:03:15.214454859 +0000 UTC m=+61.969092462" observedRunningTime="2024-10-08 20:03:15.921524915 +0000 UTC m=+62.676162458" watchObservedRunningTime="2024-10-08 20:03:15.925914539 +0000 UTC m=+62.680552082" Oct 8 20:03:21.351078 systemd[1]: Created slice kubepods-besteffort-pod936310b0_7030_42e4_b6f5_919022601c1e.slice - libcontainer container kubepods-besteffort-pod936310b0_7030_42e4_b6f5_919022601c1e.slice. Oct 8 20:03:21.372915 systemd[1]: Created slice kubepods-besteffort-pod9538ed6d_6984_4374_8bed_8bbc3e5caa84.slice - libcontainer container kubepods-besteffort-pod9538ed6d_6984_4374_8bed_8bbc3e5caa84.slice. 
Oct 8 20:03:21.377926 kubelet[2584]: I1008 20:03:21.375632 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/936310b0-7030-42e4-b6f5-919022601c1e-calico-apiserver-certs\") pod \"calico-apiserver-f66f9974-2t6dh\" (UID: \"936310b0-7030-42e4-b6f5-919022601c1e\") " pod="calico-apiserver/calico-apiserver-f66f9974-2t6dh" Oct 8 20:03:21.377926 kubelet[2584]: I1008 20:03:21.375679 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt4sj\" (UniqueName: \"kubernetes.io/projected/9538ed6d-6984-4374-8bed-8bbc3e5caa84-kube-api-access-dt4sj\") pod \"calico-apiserver-f66f9974-mshqd\" (UID: \"9538ed6d-6984-4374-8bed-8bbc3e5caa84\") " pod="calico-apiserver/calico-apiserver-f66f9974-mshqd" Oct 8 20:03:21.377926 kubelet[2584]: I1008 20:03:21.375700 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpnzv\" (UniqueName: \"kubernetes.io/projected/936310b0-7030-42e4-b6f5-919022601c1e-kube-api-access-qpnzv\") pod \"calico-apiserver-f66f9974-2t6dh\" (UID: \"936310b0-7030-42e4-b6f5-919022601c1e\") " pod="calico-apiserver/calico-apiserver-f66f9974-2t6dh" Oct 8 20:03:21.377926 kubelet[2584]: I1008 20:03:21.375723 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9538ed6d-6984-4374-8bed-8bbc3e5caa84-calico-apiserver-certs\") pod \"calico-apiserver-f66f9974-mshqd\" (UID: \"9538ed6d-6984-4374-8bed-8bbc3e5caa84\") " pod="calico-apiserver/calico-apiserver-f66f9974-mshqd" Oct 8 20:03:21.670380 containerd[1448]: time="2024-10-08T20:03:21.670259407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f66f9974-2t6dh,Uid:936310b0-7030-42e4-b6f5-919022601c1e,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:03:21.680448 containerd[1448]: time="2024-10-08T20:03:21.680371471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f66f9974-mshqd,Uid:9538ed6d-6984-4374-8bed-8bbc3e5caa84,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:03:21.862877 systemd-networkd[1361]: cali0b149f54395: Link UP Oct 8 20:03:21.864030 systemd-networkd[1361]: cali0b149f54395: Gained carrier Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.744 [INFO][4823] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0 calico-apiserver-f66f9974- calico-apiserver 936310b0-7030-42e4-b6f5-919022601c1e 882 0 2024-10-08 20:03:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f66f9974 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-b-d257b8cc02.novalocal calico-apiserver-f66f9974-2t6dh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0b149f54395 [] []}} ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-2t6dh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.745 [INFO][4823] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-2t6dh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.794 [INFO][4846] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" HandleID="k8s-pod-network.00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.812 [INFO][4846] ipam_plugin.go 270: Auto assigning IP ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" HandleID="k8s-pod-network.00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ada0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-b-d257b8cc02.novalocal", "pod":"calico-apiserver-f66f9974-2t6dh", "timestamp":"2024-10-08 20:03:21.794840237 +0000 UTC"}, Hostname:"ci-4081-1-0-b-d257b8cc02.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.813 [INFO][4846] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.813 [INFO][4846] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.813 [INFO][4846] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-b-d257b8cc02.novalocal' Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.816 [INFO][4846] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.821 [INFO][4846] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.826 [INFO][4846] ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.829 [INFO][4846] ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.833 [INFO][4846] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.834 [INFO][4846] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.836 [INFO][4846] ipam.go 1685: Creating new handle: k8s-pod-network.00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2 Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.843 [INFO][4846] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.853 [INFO][4846] ipam.go 1216: Successfully claimed IPs: [192.168.61.69/26] block=192.168.61.64/26 handle="k8s-pod-network.00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.853 [INFO][4846] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.69/26] handle="k8s-pod-network.00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.853 [INFO][4846] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:03:21.887678 containerd[1448]: 2024-10-08 20:03:21.853 [INFO][4846] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.61.69/26] IPv6=[] ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" HandleID="k8s-pod-network.00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" Oct 8 20:03:21.889481 containerd[1448]: 2024-10-08 20:03:21.858 [INFO][4823] k8s.go 386: Populated endpoint ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-2t6dh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0", GenerateName:"calico-apiserver-f66f9974-", Namespace:"calico-apiserver", SelfLink:"", UID:"936310b0-7030-42e4-b6f5-919022601c1e", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f66f9974", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"", Pod:"calico-apiserver-f66f9974-2t6dh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b149f54395", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:21.889481 containerd[1448]: 2024-10-08 20:03:21.858 [INFO][4823] k8s.go 387: Calico CNI using IPs: [192.168.61.69/32] ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-2t6dh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" Oct 8 20:03:21.889481 containerd[1448]: 2024-10-08 20:03:21.858 [INFO][4823] dataplane_linux.go 68: Setting the host side veth name to cali0b149f54395 ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-2t6dh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" Oct 8 20:03:21.889481 containerd[1448]: 2024-10-08 20:03:21.863 [INFO][4823] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-2t6dh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" Oct 8 20:03:21.889481 containerd[1448]: 2024-10-08 20:03:21.865 [INFO][4823] k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-2t6dh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0", GenerateName:"calico-apiserver-f66f9974-", Namespace:"calico-apiserver", SelfLink:"", UID:"936310b0-7030-42e4-b6f5-919022601c1e", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f66f9974", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2", Pod:"calico-apiserver-f66f9974-2t6dh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b149f54395", MAC:"4e:ea:01:e7:39:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:21.889481 containerd[1448]: 2024-10-08 20:03:21.878 [INFO][4823] k8s.go 500: Wrote updated endpoint to datastore ContainerID="00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-2t6dh" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--2t6dh-eth0" Oct 8 20:03:21.917205 containerd[1448]: time="2024-10-08T20:03:21.916921764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:21.917205 containerd[1448]: time="2024-10-08T20:03:21.916981547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:21.917205 containerd[1448]: time="2024-10-08T20:03:21.917004480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:21.917205 containerd[1448]: time="2024-10-08T20:03:21.917328371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:21.949177 systemd[1]: Started cri-containerd-00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2.scope - libcontainer container 00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2. 
Oct 8 20:03:21.978075 systemd-networkd[1361]: cali435cf43bcb6: Link UP Oct 8 20:03:21.978804 systemd-networkd[1361]: cali435cf43bcb6: Gained carrier Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.780 [INFO][4827] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0 calico-apiserver-f66f9974- calico-apiserver 9538ed6d-6984-4374-8bed-8bbc3e5caa84 880 0 2024-10-08 20:03:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f66f9974 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-b-d257b8cc02.novalocal calico-apiserver-f66f9974-mshqd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali435cf43bcb6 [] []}} ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-mshqd" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.780 [INFO][4827] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-mshqd" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.825 [INFO][4853] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" HandleID="k8s-pod-network.97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.840 [INFO][4853] ipam_plugin.go 270: Auto assigning IP ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" HandleID="k8s-pod-network.97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ad0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-b-d257b8cc02.novalocal", "pod":"calico-apiserver-f66f9974-mshqd", "timestamp":"2024-10-08 20:03:21.825161731 +0000 UTC"}, Hostname:"ci-4081-1-0-b-d257b8cc02.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.840 [INFO][4853] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.854 [INFO][4853] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.854 [INFO][4853] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-b-d257b8cc02.novalocal' Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.918 [INFO][4853] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.928 [INFO][4853] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.939 [INFO][4853] ipam.go 489: Trying affinity for 192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.942 [INFO][4853] ipam.go 155: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.951 [INFO][4853] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.951 [INFO][4853] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.953 [INFO][4853] ipam.go 1685: Creating new handle: k8s-pod-network.97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.962 [INFO][4853] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.971 [INFO][4853] ipam.go 1216: Successfully claimed IPs: [192.168.61.70/26] block=192.168.61.64/26 handle="k8s-pod-network.97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.972 [INFO][4853] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.70/26] handle="k8s-pod-network.97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" host="ci-4081-1-0-b-d257b8cc02.novalocal" Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.973 [INFO][4853] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:03:22.013161 containerd[1448]: 2024-10-08 20:03:21.973 [INFO][4853] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.61.70/26] IPv6=[] ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" HandleID="k8s-pod-network.97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Workload="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0" Oct 8 20:03:22.013933 containerd[1448]: 2024-10-08 20:03:21.975 [INFO][4827] k8s.go 386: Populated endpoint ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-mshqd" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0", GenerateName:"calico-apiserver-f66f9974-", Namespace:"calico-apiserver", SelfLink:"", UID:"9538ed6d-6984-4374-8bed-8bbc3e5caa84", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f66f9974", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"", Pod:"calico-apiserver-f66f9974-mshqd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali435cf43bcb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:22.013933 containerd[1448]: 2024-10-08 20:03:21.975 [INFO][4827] k8s.go 387: Calico CNI using IPs: [192.168.61.70/32] ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-mshqd" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0" Oct 8 20:03:22.013933 containerd[1448]: 2024-10-08 20:03:21.975 [INFO][4827] dataplane_linux.go 68: Setting the host side veth name to cali435cf43bcb6 ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-mshqd" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0" Oct 8 20:03:22.013933 containerd[1448]: 2024-10-08 20:03:21.979 [INFO][4827] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-mshqd" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0" Oct 8 20:03:22.013933 containerd[1448]: 2024-10-08 20:03:21.983 [INFO][4827] k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-mshqd" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0", GenerateName:"calico-apiserver-f66f9974-", Namespace:"calico-apiserver", SelfLink:"", UID:"9538ed6d-6984-4374-8bed-8bbc3e5caa84", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f66f9974", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-b-d257b8cc02.novalocal", ContainerID:"97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af", Pod:"calico-apiserver-f66f9974-mshqd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali435cf43bcb6", MAC:"3a:ec:6b:1f:57:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:03:22.013933 containerd[1448]: 2024-10-08 20:03:22.007 [INFO][4827] k8s.go 500: Wrote updated endpoint to datastore ContainerID="97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af" Namespace="calico-apiserver" Pod="calico-apiserver-f66f9974-mshqd" WorkloadEndpoint="ci--4081--1--0--b--d257b8cc02.novalocal-k8s-calico--apiserver--f66f9974--mshqd-eth0"
Oct 8 20:03:22.055474 containerd[1448]: time="2024-10-08T20:03:22.052870221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f66f9974-2t6dh,Uid:936310b0-7030-42e4-b6f5-919022601c1e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2\""
Oct 8 20:03:22.055474 containerd[1448]: time="2024-10-08T20:03:22.055018601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 8 20:03:22.097563 containerd[1448]: time="2024-10-08T20:03:22.097075037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:03:22.097563 containerd[1448]: time="2024-10-08T20:03:22.097192228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:03:22.097563 containerd[1448]: time="2024-10-08T20:03:22.097245559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:03:22.098289 containerd[1448]: time="2024-10-08T20:03:22.098164451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:03:22.117901 systemd[1]: Started cri-containerd-97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af.scope - libcontainer container 97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af.
Oct 8 20:03:22.165204 containerd[1448]: time="2024-10-08T20:03:22.165166711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f66f9974-mshqd,Uid:9538ed6d-6984-4374-8bed-8bbc3e5caa84,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af\""
Oct 8 20:03:23.682704 systemd-networkd[1361]: cali0b149f54395: Gained IPv6LL
Oct 8 20:03:23.745384 systemd-networkd[1361]: cali435cf43bcb6: Gained IPv6LL
Oct 8 20:03:26.357652 containerd[1448]: time="2024-10-08T20:03:26.357168868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:26.359675 containerd[1448]: time="2024-10-08T20:03:26.359630496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 8 20:03:26.362837 containerd[1448]: time="2024-10-08T20:03:26.361863493Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:26.371326 containerd[1448]: time="2024-10-08T20:03:26.370924375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:26.372856 containerd[1448]: time="2024-10-08T20:03:26.372820276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 4.317697219s"
Oct 8 20:03:26.372953 containerd[1448]: time="2024-10-08T20:03:26.372857196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 8 20:03:26.374215 containerd[1448]: time="2024-10-08T20:03:26.374186971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 8 20:03:26.375466 containerd[1448]: time="2024-10-08T20:03:26.375356575Z" level=info msg="CreateContainer within sandbox \"00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 20:03:26.413982 containerd[1448]: time="2024-10-08T20:03:26.413944495Z" level=info msg="CreateContainer within sandbox \"00aa784bdb17cb5100899ef9dc49696d6689c910267f08fca5adacc60ccb46e2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"70dea3ab049099bbae4f7a916dbf40c8560304347f0153523f83b36ca5c26c4b\""
Oct 8 20:03:26.417189 containerd[1448]: time="2024-10-08T20:03:26.414687706Z" level=info msg="StartContainer for \"70dea3ab049099bbae4f7a916dbf40c8560304347f0153523f83b36ca5c26c4b\""
Oct 8 20:03:26.452954 systemd[1]: Started cri-containerd-70dea3ab049099bbae4f7a916dbf40c8560304347f0153523f83b36ca5c26c4b.scope - libcontainer container 70dea3ab049099bbae4f7a916dbf40c8560304347f0153523f83b36ca5c26c4b.
Oct 8 20:03:26.511283 containerd[1448]: time="2024-10-08T20:03:26.511246686Z" level=info msg="StartContainer for \"70dea3ab049099bbae4f7a916dbf40c8560304347f0153523f83b36ca5c26c4b\" returns successfully"
Oct 8 20:03:26.810539 containerd[1448]: time="2024-10-08T20:03:26.810086371Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:26.814996 containerd[1448]: time="2024-10-08T20:03:26.814926279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77"
Oct 8 20:03:26.821835 containerd[1448]: time="2024-10-08T20:03:26.821726170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 447.447525ms"
Oct 8 20:03:26.821941 containerd[1448]: time="2024-10-08T20:03:26.821853300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 8 20:03:26.831761 containerd[1448]: time="2024-10-08T20:03:26.831685834Z" level=info msg="CreateContainer within sandbox \"97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 20:03:26.856341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2417825571.mount: Deactivated successfully.
Oct 8 20:03:26.869594 containerd[1448]: time="2024-10-08T20:03:26.869484418Z" level=info msg="CreateContainer within sandbox \"97a83c8a47a0f5f2a00056357b2f95f77903ac7dcc2b1e7cc8eac2cdf59802af\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fc81953f05cd2854cfda2d46e23acecdfe1a1c1cc3f39ad1d93b0a6c4c496326\""
Oct 8 20:03:26.875042 containerd[1448]: time="2024-10-08T20:03:26.875000211Z" level=info msg="StartContainer for \"fc81953f05cd2854cfda2d46e23acecdfe1a1c1cc3f39ad1d93b0a6c4c496326\""
Oct 8 20:03:26.908012 systemd[1]: Started cri-containerd-fc81953f05cd2854cfda2d46e23acecdfe1a1c1cc3f39ad1d93b0a6c4c496326.scope - libcontainer container fc81953f05cd2854cfda2d46e23acecdfe1a1c1cc3f39ad1d93b0a6c4c496326.
Oct 8 20:03:26.967543 containerd[1448]: time="2024-10-08T20:03:26.967497267Z" level=info msg="StartContainer for \"fc81953f05cd2854cfda2d46e23acecdfe1a1c1cc3f39ad1d93b0a6c4c496326\" returns successfully"
Oct 8 20:03:27.167855 kubelet[2584]: I1008 20:03:27.167689 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f66f9974-2t6dh" podStartSLOduration=1.8483301380000001 podStartE2EDuration="6.167669351s" podCreationTimestamp="2024-10-08 20:03:21 +0000 UTC" firstStartedPulling="2024-10-08 20:03:22.054320715 +0000 UTC m=+68.808958248" lastFinishedPulling="2024-10-08 20:03:26.373659918 +0000 UTC m=+73.128297461" observedRunningTime="2024-10-08 20:03:27.143771692 +0000 UTC m=+73.898409235" watchObservedRunningTime="2024-10-08 20:03:27.167669351 +0000 UTC m=+73.922306894"
Oct 8 20:03:27.488524 kubelet[2584]: I1008 20:03:27.488037 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f66f9974-mshqd" podStartSLOduration=1.83075853 podStartE2EDuration="6.488023016s" podCreationTimestamp="2024-10-08 20:03:21 +0000 UTC" firstStartedPulling="2024-10-08 20:03:22.166413181 +0000 UTC m=+68.921050724" lastFinishedPulling="2024-10-08 20:03:26.823677627 +0000 UTC m=+73.578315210" observedRunningTime="2024-10-08 20:03:27.167497868 +0000 UTC m=+73.922135401" watchObservedRunningTime="2024-10-08 20:03:27.488023016 +0000 UTC m=+74.242660559"
Oct 8 20:03:34.236051 systemd[1]: Started sshd@9-172.24.4.139:22-172.24.4.1:58514.service - OpenSSH per-connection server daemon (172.24.4.1:58514).
Oct 8 20:03:35.501891 sshd[5076]: Accepted publickey for core from 172.24.4.1 port 58514 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:03:35.510344 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:03:35.627233 systemd-logind[1429]: New session 12 of user core.
Oct 8 20:03:35.636607 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 20:03:38.850952 sshd[5076]: pam_unix(sshd:session): session closed for user core
Oct 8 20:03:38.862182 systemd[1]: sshd@9-172.24.4.139:22-172.24.4.1:58514.service: Deactivated successfully.
Oct 8 20:03:38.867697 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 20:03:38.870799 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit.
Oct 8 20:03:38.876369 systemd-logind[1429]: Removed session 12.
Oct 8 20:03:43.876249 systemd[1]: Started sshd@10-172.24.4.139:22-172.24.4.1:35998.service - OpenSSH per-connection server daemon (172.24.4.1:35998).
Oct 8 20:03:45.490365 sshd[5107]: Accepted publickey for core from 172.24.4.1 port 35998 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:03:45.492973 sshd[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:03:45.509883 systemd-logind[1429]: New session 13 of user core.
Oct 8 20:03:45.516357 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 20:03:46.411030 systemd[1]: run-containerd-runc-k8s.io-fe37dd6a37401a4dba4ca663e04fb11cae77d561d4d5af2666c7d760528ce9df-runc.Z9P3zN.mount: Deactivated successfully.
Oct 8 20:03:46.412141 sshd[5107]: pam_unix(sshd:session): session closed for user core
Oct 8 20:03:46.420605 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit.
Oct 8 20:03:46.422135 systemd[1]: sshd@10-172.24.4.139:22-172.24.4.1:35998.service: Deactivated successfully.
Oct 8 20:03:46.427472 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 20:03:46.429248 systemd-logind[1429]: Removed session 13.
Oct 8 20:03:51.439259 systemd[1]: Started sshd@11-172.24.4.139:22-172.24.4.1:55960.service - OpenSSH per-connection server daemon (172.24.4.1:55960).
Oct 8 20:03:53.015313 sshd[5170]: Accepted publickey for core from 172.24.4.1 port 55960 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:03:53.017669 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:03:53.023994 systemd-logind[1429]: New session 14 of user core.
Oct 8 20:03:53.032963 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 20:03:54.119680 sshd[5170]: pam_unix(sshd:session): session closed for user core
Oct 8 20:03:54.143356 systemd[1]: Started sshd@12-172.24.4.139:22-172.24.4.1:55970.service - OpenSSH per-connection server daemon (172.24.4.1:55970).
Oct 8 20:03:54.145636 systemd[1]: sshd@11-172.24.4.139:22-172.24.4.1:55960.service: Deactivated successfully.
Oct 8 20:03:54.154584 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 20:03:54.161361 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit.
Oct 8 20:03:54.166692 systemd-logind[1429]: Removed session 14.
Oct 8 20:03:55.967114 sshd[5182]: Accepted publickey for core from 172.24.4.1 port 55970 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:03:55.969833 sshd[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:03:55.980930 systemd-logind[1429]: New session 15 of user core.
Oct 8 20:03:55.993054 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 20:03:57.140084 systemd[1]: run-containerd-runc-k8s.io-c784977670e60f82278b61a222e4a07afffd0a8a03aa6dfd3eacd13e77d9d1a9-runc.Y3R4Kl.mount: Deactivated successfully.
Oct 8 20:03:57.411365 sshd[5182]: pam_unix(sshd:session): session closed for user core
Oct 8 20:03:57.424937 systemd[1]: sshd@12-172.24.4.139:22-172.24.4.1:55970.service: Deactivated successfully.
Oct 8 20:03:57.428174 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 20:03:57.430141 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit.
Oct 8 20:03:57.437295 systemd[1]: Started sshd@13-172.24.4.139:22-172.24.4.1:43482.service - OpenSSH per-connection server daemon (172.24.4.1:43482).
Oct 8 20:03:57.438977 systemd-logind[1429]: Removed session 15.
Oct 8 20:03:58.985235 sshd[5215]: Accepted publickey for core from 172.24.4.1 port 43482 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:03:59.053701 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:03:59.065719 systemd-logind[1429]: New session 16 of user core.
Oct 8 20:03:59.070362 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 20:04:00.473225 sshd[5215]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:00.479932 systemd[1]: sshd@13-172.24.4.139:22-172.24.4.1:43482.service: Deactivated successfully.
Oct 8 20:04:00.483602 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 20:04:00.486172 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit.
Oct 8 20:04:00.488958 systemd-logind[1429]: Removed session 16.
Oct 8 20:04:05.487997 systemd[1]: Started sshd@14-172.24.4.139:22-172.24.4.1:59994.service - OpenSSH per-connection server daemon (172.24.4.1:59994).
Oct 8 20:04:06.931495 sshd[5239]: Accepted publickey for core from 172.24.4.1 port 59994 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:06.934297 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:06.940824 systemd-logind[1429]: New session 17 of user core.
Oct 8 20:04:06.946896 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 20:04:07.753895 sshd[5239]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:07.787872 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit.
Oct 8 20:04:07.790137 systemd[1]: sshd@14-172.24.4.139:22-172.24.4.1:59994.service: Deactivated successfully.
Oct 8 20:04:07.798065 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 20:04:07.801650 systemd-logind[1429]: Removed session 17.
Oct 8 20:04:12.770400 systemd[1]: Started sshd@15-172.24.4.139:22-172.24.4.1:60000.service - OpenSSH per-connection server daemon (172.24.4.1:60000).
Oct 8 20:04:14.005064 sshd[5256]: Accepted publickey for core from 172.24.4.1 port 60000 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:14.009180 sshd[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:14.021526 systemd-logind[1429]: New session 18 of user core.
Oct 8 20:04:14.030243 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 20:04:15.418980 sshd[5256]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:15.429426 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit.
Oct 8 20:04:15.430959 systemd[1]: sshd@15-172.24.4.139:22-172.24.4.1:60000.service: Deactivated successfully.
Oct 8 20:04:15.435148 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 20:04:15.438168 systemd-logind[1429]: Removed session 18.
Oct 8 20:04:20.441504 systemd[1]: Started sshd@16-172.24.4.139:22-172.24.4.1:38440.service - OpenSSH per-connection server daemon (172.24.4.1:38440).
Oct 8 20:04:21.916922 sshd[5317]: Accepted publickey for core from 172.24.4.1 port 38440 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:21.922291 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:21.935926 systemd-logind[1429]: New session 19 of user core.
Oct 8 20:04:21.942062 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 20:04:22.851326 sshd[5317]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:22.869372 systemd[1]: Started sshd@17-172.24.4.139:22-172.24.4.1:38452.service - OpenSSH per-connection server daemon (172.24.4.1:38452).
Oct 8 20:04:22.870587 systemd[1]: sshd@16-172.24.4.139:22-172.24.4.1:38440.service: Deactivated successfully.
Oct 8 20:04:22.878898 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 20:04:22.883191 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit.
Oct 8 20:04:22.887792 systemd-logind[1429]: Removed session 19.
Oct 8 20:04:25.177318 sshd[5335]: Accepted publickey for core from 172.24.4.1 port 38452 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:25.179970 sshd[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:25.191565 systemd-logind[1429]: New session 20 of user core.
Oct 8 20:04:25.202128 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 20:04:26.639915 sshd[5335]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:26.658680 systemd[1]: Started sshd@18-172.24.4.139:22-172.24.4.1:37232.service - OpenSSH per-connection server daemon (172.24.4.1:37232).
Oct 8 20:04:26.660057 systemd[1]: sshd@17-172.24.4.139:22-172.24.4.1:38452.service: Deactivated successfully.
Oct 8 20:04:26.666022 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 20:04:26.671006 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit.
Oct 8 20:04:26.675360 systemd-logind[1429]: Removed session 20.
Oct 8 20:04:27.858970 sshd[5346]: Accepted publickey for core from 172.24.4.1 port 37232 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:27.863224 sshd[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:27.875795 systemd-logind[1429]: New session 21 of user core.
Oct 8 20:04:27.884091 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 20:04:31.084822 sshd[5346]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:31.098870 systemd[1]: Started sshd@19-172.24.4.139:22-172.24.4.1:37234.service - OpenSSH per-connection server daemon (172.24.4.1:37234).
Oct 8 20:04:31.115936 systemd[1]: sshd@18-172.24.4.139:22-172.24.4.1:37232.service: Deactivated successfully.
Oct 8 20:04:31.120252 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 20:04:31.124041 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit.
Oct 8 20:04:31.130948 systemd-logind[1429]: Removed session 21.
Oct 8 20:04:32.503578 sshd[5365]: Accepted publickey for core from 172.24.4.1 port 37234 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:32.537714 sshd[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:32.551870 systemd-logind[1429]: New session 22 of user core.
Oct 8 20:04:32.562138 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 20:04:35.402550 sshd[5365]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:35.413233 systemd[1]: sshd@19-172.24.4.139:22-172.24.4.1:37234.service: Deactivated successfully.
Oct 8 20:04:35.415329 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 20:04:35.417521 systemd-logind[1429]: Session 22 logged out. Waiting for processes to exit.
Oct 8 20:04:35.423254 systemd[1]: Started sshd@20-172.24.4.139:22-172.24.4.1:44934.service - OpenSSH per-connection server daemon (172.24.4.1:44934).
Oct 8 20:04:35.426305 systemd-logind[1429]: Removed session 22.
Oct 8 20:04:36.820802 sshd[5383]: Accepted publickey for core from 172.24.4.1 port 44934 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:36.824375 sshd[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:36.839650 systemd-logind[1429]: New session 23 of user core.
Oct 8 20:04:36.844127 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 20:04:37.909596 sshd[5383]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:37.915282 systemd[1]: sshd@20-172.24.4.139:22-172.24.4.1:44934.service: Deactivated successfully.
Oct 8 20:04:37.919409 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 20:04:37.921433 systemd-logind[1429]: Session 23 logged out. Waiting for processes to exit.
Oct 8 20:04:37.923356 systemd-logind[1429]: Removed session 23.
Oct 8 20:04:42.930458 systemd[1]: Started sshd@21-172.24.4.139:22-172.24.4.1:44950.service - OpenSSH per-connection server daemon (172.24.4.1:44950).
Oct 8 20:04:44.473976 sshd[5417]: Accepted publickey for core from 172.24.4.1 port 44950 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:44.476359 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:44.487267 systemd-logind[1429]: New session 24 of user core.
Oct 8 20:04:44.506017 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 20:04:45.277106 sshd[5417]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:45.280965 systemd-logind[1429]: Session 24 logged out. Waiting for processes to exit.
Oct 8 20:04:45.282322 systemd[1]: sshd@21-172.24.4.139:22-172.24.4.1:44950.service: Deactivated successfully.
Oct 8 20:04:45.285232 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 20:04:45.289605 systemd-logind[1429]: Removed session 24.
Oct 8 20:04:50.294246 systemd[1]: Started sshd@22-172.24.4.139:22-172.24.4.1:36604.service - OpenSSH per-connection server daemon (172.24.4.1:36604).
Oct 8 20:04:51.962915 sshd[5473]: Accepted publickey for core from 172.24.4.1 port 36604 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:51.967956 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:51.980117 systemd-logind[1429]: New session 25 of user core.
Oct 8 20:04:51.984368 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 20:04:52.820101 sshd[5473]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:52.828365 systemd[1]: sshd@22-172.24.4.139:22-172.24.4.1:36604.service: Deactivated successfully.
Oct 8 20:04:52.834219 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 20:04:52.836378 systemd-logind[1429]: Session 25 logged out. Waiting for processes to exit.
Oct 8 20:04:52.839378 systemd-logind[1429]: Removed session 25.
Oct 8 20:04:57.142252 systemd[1]: run-containerd-runc-k8s.io-c784977670e60f82278b61a222e4a07afffd0a8a03aa6dfd3eacd13e77d9d1a9-runc.nw43i0.mount: Deactivated successfully.
Oct 8 20:04:57.841526 systemd[1]: Started sshd@23-172.24.4.139:22-172.24.4.1:51914.service - OpenSSH per-connection server daemon (172.24.4.1:51914).
Oct 8 20:04:59.160134 sshd[5512]: Accepted publickey for core from 172.24.4.1 port 51914 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:04:59.164481 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:04:59.208219 systemd-logind[1429]: New session 26 of user core.
Oct 8 20:04:59.219102 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 20:04:59.953278 sshd[5512]: pam_unix(sshd:session): session closed for user core
Oct 8 20:04:59.962564 systemd[1]: sshd@23-172.24.4.139:22-172.24.4.1:51914.service: Deactivated successfully.
Oct 8 20:04:59.966818 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 20:04:59.970580 systemd-logind[1429]: Session 26 logged out. Waiting for processes to exit.
Oct 8 20:04:59.973391 systemd-logind[1429]: Removed session 26.
Oct 8 20:05:04.977417 systemd[1]: Started sshd@24-172.24.4.139:22-172.24.4.1:48444.service - OpenSSH per-connection server daemon (172.24.4.1:48444).
Oct 8 20:05:06.507296 sshd[5530]: Accepted publickey for core from 172.24.4.1 port 48444 ssh2: RSA SHA256:N4tAxOYyt600zP8LzVHN9krjQqk3csZTCmZq/eMm2uA
Oct 8 20:05:06.512267 sshd[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:05:06.523947 systemd-logind[1429]: New session 27 of user core.
Oct 8 20:05:06.534120 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 8 20:05:07.236132 sshd[5530]: pam_unix(sshd:session): session closed for user core
Oct 8 20:05:07.244173 systemd-logind[1429]: Session 27 logged out. Waiting for processes to exit.
Oct 8 20:05:07.245654 systemd[1]: sshd@24-172.24.4.139:22-172.24.4.1:48444.service: Deactivated successfully.
Oct 8 20:05:07.249996 systemd[1]: session-27.scope: Deactivated successfully.
Oct 8 20:05:07.254992 systemd-logind[1429]: Removed session 27.